00:00:00.001 Started by upstream project "autotest-per-patch" build number 127169 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 24316 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.053 The recommended git tool is: git 00:00:00.053 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.112 Fetching changes from the remote Git repository 00:00:00.115 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.141 Using shallow fetch with depth 1 00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.141 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.223 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.223 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/10/24310/6 # timeout=5 00:00:05.030 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.042 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.055 Checking out Revision 372f1a46acd6f697d572411a452deafc9650d88b (FETCH_HEAD) 00:00:05.055 > git config core.sparsecheckout # timeout=10 00:00:05.066 > git read-tree -mu HEAD # timeout=10 00:00:05.083 > git checkout -f 372f1a46acd6f697d572411a452deafc9650d88b # timeout=5 00:00:05.107 Commit message: "jenkins/autotest: remove redundant RAID flags" 00:00:05.108 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:05.235 [Pipeline] Start of Pipeline 00:00:05.249 [Pipeline] library 00:00:05.251 Loading library shm_lib@master 00:00:05.251 Library shm_lib@master is cached. Copying from home. 00:00:05.269 [Pipeline] node 00:00:20.271 Still waiting to schedule task 00:00:20.271 Waiting for next available executor on ‘vagrant-vm-host’ 00:10:20.025 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2 00:10:20.027 [Pipeline] { 00:10:20.041 [Pipeline] catchError 00:10:20.042 [Pipeline] { 00:10:20.055 [Pipeline] wrap 00:10:20.063 [Pipeline] { 00:10:20.070 [Pipeline] stage 00:10:20.071 [Pipeline] { (Prologue) 00:10:20.086 [Pipeline] echo 00:10:20.087 Node: VM-host-SM9 00:10:20.091 [Pipeline] cleanWs 00:10:20.099 [WS-CLEANUP] Deleting project workspace... 00:10:20.099 [WS-CLEANUP] Deferred wipeout is used... 
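Editor's note: the checkout sequence above pulls a single Gerrit patchset by shallow-fetching its change ref (refs/changes/10/24310/6, i.e. last-two-digits-of-change/change-number/patchset). A minimal sketch of reproducing that step by hand, with the repository URL and change ref copied from this log; the credential helper (GIT_ASKPASS) and the proxy-dmz.intel.com proxy configuration the CI uses are deliberately omitted here:

  # Recreate the job's shallow checkout of one Gerrit patchset.
  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # --depth=1 keeps the fetch shallow, exactly as the job does above.
  git fetch --tags --force --progress --depth=1 origin refs/changes/10/24310/6
  git checkout -f FETCH_HEAD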
00:10:20.104 [WS-CLEANUP] done 00:10:20.259 [Pipeline] setCustomBuildProperty 00:10:20.346 [Pipeline] httpRequest 00:10:20.376 [Pipeline] echo 00:10:20.377 Sorcerer 10.211.164.101 is alive 00:10:20.385 [Pipeline] httpRequest 00:10:20.389 HttpMethod: GET 00:10:20.390 URL: http://10.211.164.101/packages/jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz 00:10:20.390 Sending request to url: http://10.211.164.101/packages/jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz 00:10:20.391 Response Code: HTTP/1.1 200 OK 00:10:20.392 Success: Status code 200 is in the accepted range: 200,404 00:10:20.392 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz 00:10:20.535 [Pipeline] sh 00:10:20.812 + tar --no-same-owner -xf jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz 00:10:20.830 [Pipeline] httpRequest 00:10:20.848 [Pipeline] echo 00:10:20.850 Sorcerer 10.211.164.101 is alive 00:10:20.860 [Pipeline] httpRequest 00:10:20.865 HttpMethod: GET 00:10:20.867 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:10:20.868 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:10:20.868 Response Code: HTTP/1.1 200 OK 00:10:20.870 Success: Status code 200 is in the accepted range: 200,404 00:10:20.870 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:10:24.192 [Pipeline] sh 00:10:24.466 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:10:27.755 [Pipeline] sh 00:10:28.063 + git -C spdk log --oneline -n5 00:10:28.063 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:10:28.063 fc2398dfa raid: clear base bdev configure_cb after executing 00:10:28.063 5558f3f50 raid: complete bdev_raid_create after sb is written 00:10:28.063 d005e023b raid: fix empty slot not updated in sb after resize 00:10:28.063 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:10:28.083 [Pipeline] writeFile 00:10:28.102 [Pipeline] sh 00:10:28.380 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:10:28.390 [Pipeline] sh 00:10:28.667 + cat autorun-spdk.conf 00:10:28.667 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:28.667 SPDK_TEST_NVME=1 00:10:28.667 SPDK_TEST_FTL=1 00:10:28.667 SPDK_TEST_ISAL=1 00:10:28.667 SPDK_RUN_ASAN=1 00:10:28.667 SPDK_RUN_UBSAN=1 00:10:28.667 SPDK_TEST_XNVME=1 00:10:28.667 SPDK_TEST_NVME_FDP=1 00:10:28.667 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:28.673 RUN_NIGHTLY=0 00:10:28.675 [Pipeline] } 00:10:28.690 [Pipeline] // stage 00:10:28.704 [Pipeline] stage 00:10:28.706 [Pipeline] { (Run VM) 00:10:28.720 [Pipeline] sh 00:10:28.996 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:10:28.996 + echo 'Start stage prepare_nvme.sh' 00:10:28.996 Start stage prepare_nvme.sh 00:10:28.996 + [[ -n 1 ]] 00:10:28.996 + disk_prefix=ex1 00:10:28.996 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:10:28.996 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:10:28.996 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:10:28.996 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:28.996 ++ SPDK_TEST_NVME=1 00:10:28.996 ++ SPDK_TEST_FTL=1 00:10:28.996 ++ SPDK_TEST_ISAL=1 00:10:28.996 ++ SPDK_RUN_ASAN=1 00:10:28.996 ++ SPDK_RUN_UBSAN=1 00:10:28.996 ++ SPDK_TEST_XNVME=1 00:10:28.996 ++ SPDK_TEST_NVME_FDP=1 00:10:28.996 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:28.996 ++ RUN_NIGHTLY=0 00:10:28.996 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:10:28.996 + nvme_files=() 00:10:28.996 + declare -A nvme_files 00:10:28.996 + backend_dir=/var/lib/libvirt/images/backends 00:10:28.996 + nvme_files['nvme.img']=5G 00:10:28.996 + nvme_files['nvme-cmb.img']=5G 00:10:28.996 + nvme_files['nvme-multi0.img']=4G 00:10:28.996 + nvme_files['nvme-multi1.img']=4G 00:10:28.996 + nvme_files['nvme-multi2.img']=4G 00:10:28.996 + nvme_files['nvme-openstack.img']=8G 00:10:28.996 + nvme_files['nvme-zns.img']=5G 00:10:28.996 + (( SPDK_TEST_NVME_PMR == 1 )) 00:10:28.996 + (( SPDK_TEST_FTL == 1 )) 00:10:28.996 + nvme_files["nvme-ftl.img"]=6G 00:10:28.996 + (( SPDK_TEST_NVME_FDP == 1 )) 00:10:28.996 + nvme_files["nvme-fdp.img"]=1G 00:10:28.996 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:10:28.996 + for nvme in "${!nvme_files[@]}" 00:10:28.996 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:10:28.996 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:10:28.996 + for nvme in "${!nvme_files[@]}" 00:10:28.996 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:10:29.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:10:29.932 + for nvme in "${!nvme_files[@]}" 00:10:29.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:10:29.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:10:29.932 + for nvme in "${!nvme_files[@]}" 00:10:29.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:10:29.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:10:29.932 + for nvme in "${!nvme_files[@]}" 00:10:29.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:10:29.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:10:29.932 + for nvme in "${!nvme_files[@]}" 00:10:29.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:10:30.190 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:10:30.190 + for nvme in "${!nvme_files[@]}" 00:10:30.190 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:10:30.190 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:10:30.190 + for nvme in "${!nvme_files[@]}" 00:10:30.190 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:10:30.449 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:10:30.449 + for nvme in "${!nvme_files[@]}" 00:10:30.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:10:31.015 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:10:31.015 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:10:31.015 + echo 'End stage prepare_nvme.sh' 00:10:31.015 End stage prepare_nvme.sh 00:10:31.028 [Pipeline] sh 00:10:31.309 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:10:31.309 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:10:31.309 00:10:31.309 
DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:10:31.309 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:10:31.309 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:10:31.309 HELP=0 00:10:31.309 DRY_RUN=0 00:10:31.309 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:10:31.309 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:10:31.309 NVME_AUTO_CREATE=0 00:10:31.309 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:10:31.309 NVME_CMB=,,,, 00:10:31.309 NVME_PMR=,,,, 00:10:31.309 NVME_ZNS=,,,, 00:10:31.309 NVME_MS=true,,,, 00:10:31.309 NVME_FDP=,,,on, 00:10:31.309 SPDK_VAGRANT_DISTRO=fedora38 00:10:31.309 SPDK_VAGRANT_VMCPU=10 00:10:31.309 SPDK_VAGRANT_VMRAM=12288 00:10:31.309 SPDK_VAGRANT_PROVIDER=libvirt 00:10:31.309 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:10:31.309 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:10:31.309 SPDK_OPENSTACK_NETWORK=0 00:10:31.309 VAGRANT_PACKAGE_BOX=0 00:10:31.309 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:10:31.309 FORCE_DISTRO=true 00:10:31.309 VAGRANT_BOX_VERSION= 00:10:31.309 EXTRA_VAGRANTFILES= 00:10:31.309 NIC_MODEL=e1000 00:10:31.309 00:10:31.309 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:10:31.309 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:10:34.616 Bringing machine 'default' up with 'libvirt' provider... 00:10:35.550 ==> default: Creating image (snapshot of base box volume). 00:10:35.550 ==> default: Creating domain with the following settings... 
00:10:35.550 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721907572_b70f2d28d2c05aa77544 00:10:35.550 ==> default: -- Domain type: kvm 00:10:35.550 ==> default: -- Cpus: 10 00:10:35.550 ==> default: -- Feature: acpi 00:10:35.550 ==> default: -- Feature: apic 00:10:35.550 ==> default: -- Feature: pae 00:10:35.550 ==> default: -- Memory: 12288M 00:10:35.550 ==> default: -- Memory Backing: hugepages: 00:10:35.550 ==> default: -- Management MAC: 00:10:35.550 ==> default: -- Loader: 00:10:35.550 ==> default: -- Nvram: 00:10:35.550 ==> default: -- Base box: spdk/fedora38 00:10:35.550 ==> default: -- Storage pool: default 00:10:35.550 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721907572_b70f2d28d2c05aa77544.img (20G) 00:10:35.550 ==> default: -- Volume Cache: default 00:10:35.550 ==> default: -- Kernel: 00:10:35.550 ==> default: -- Initrd: 00:10:35.550 ==> default: -- Graphics Type: vnc 00:10:35.550 ==> default: -- Graphics Port: -1 00:10:35.550 ==> default: -- Graphics IP: 127.0.0.1 00:10:35.550 ==> default: -- Graphics Password: Not defined 00:10:35.550 ==> default: -- Video Type: cirrus 00:10:35.550 ==> default: -- Video VRAM: 9216 00:10:35.550 ==> default: -- Sound Type: 00:10:35.550 ==> default: -- Keymap: en-us 00:10:35.550 ==> default: -- TPM Path: 00:10:35.550 ==> default: -- INPUT: type=mouse, bus=ps2 00:10:35.550 ==> default: -- Command line args: 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:10:35.550 ==> default: -> value=-drive, 00:10:35.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:10:35.550 ==> default: -> value=-drive, 00:10:35.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:10:35.550 ==> default: -> value=-drive, 00:10:35.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:35.550 ==> default: -> value=-drive, 00:10:35.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:35.550 ==> default: -> value=-drive, 00:10:35.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:10:35.550 ==> default: -> value=-drive, 00:10:35.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:10:35.550 ==> default: -> value=-device, 00:10:35.550 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:35.845 ==> default: Creating shared folders metadata... 00:10:35.845 ==> default: Starting domain. 00:10:37.227 ==> default: Waiting for domain to get an IP address... 00:10:55.302 ==> default: Waiting for SSH to become available... 00:10:56.271 ==> default: Configuring and enabling network interfaces... 00:11:00.454 default: SSH address: 192.168.121.232:22 00:11:00.454 default: SSH username: vagrant 00:11:00.454 default: SSH auth method: private key 00:11:01.834 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:11:09.955 ==> default: Mounting SSHFS shared folder... 00:11:10.570 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:11:10.570 ==> default: Checking Mount.. 00:11:11.943 ==> default: Folder Successfully Mounted! 00:11:11.943 ==> default: Running provisioner: file... 00:11:12.508 default: ~/.gitconfig => .gitconfig 00:11:12.766 00:11:12.766 SUCCESS! 00:11:12.766 00:11:12.766 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:11:12.766 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:11:12.766 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:11:12.766 00:11:12.775 [Pipeline] } 00:11:12.791 [Pipeline] // stage 00:11:12.800 [Pipeline] dir 00:11:12.801 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:11:12.803 [Pipeline] { 00:11:12.815 [Pipeline] catchError 00:11:12.817 [Pipeline] { 00:11:12.829 [Pipeline] sh 00:11:13.137 + vagrant ssh-config --host vagrant 00:11:13.137 + sed -ne /^Host/,$p 00:11:13.137 + tee ssh_conf 00:11:18.398 Host vagrant 00:11:18.398 HostName 192.168.121.232 00:11:18.398 User vagrant 00:11:18.398 Port 22 00:11:18.398 UserKnownHostsFile /dev/null 00:11:18.398 StrictHostKeyChecking no 00:11:18.398 PasswordAuthentication no 00:11:18.398 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:11:18.398 IdentitiesOnly yes 00:11:18.398 LogLevel FATAL 00:11:18.398 ForwardAgent yes 00:11:18.398 ForwardX11 yes 00:11:18.398 00:11:18.411 [Pipeline] withEnv 00:11:18.413 [Pipeline] { 00:11:18.428 [Pipeline] sh 00:11:18.703 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:11:18.703 source /etc/os-release 00:11:18.703 [[ -e /image.version ]] && img=$(< /image.version) 00:11:18.703 # Minimal, systemd-like check. 
00:11:18.703 if [[ -e /.dockerenv ]]; then 00:11:18.703 # Clear garbage from the node's name: 00:11:18.703 # agt-er_autotest_547-896 -> autotest_547-896 00:11:18.703 # $HOSTNAME is the actual container id 00:11:18.703 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:11:18.703 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:11:18.703 # We can assume this is a mount from a host where container is running, 00:11:18.703 # so fetch its hostname to easily identify the target swarm worker. 00:11:18.703 container="$(< /etc/hostname) ($agent)" 00:11:18.703 else 00:11:18.703 # Fallback 00:11:18.703 container=$agent 00:11:18.703 fi 00:11:18.703 fi 00:11:18.703 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:11:18.703 00:11:18.713 [Pipeline] } 00:11:18.732 [Pipeline] // withEnv 00:11:18.741 [Pipeline] setCustomBuildProperty 00:11:18.756 [Pipeline] stage 00:11:18.758 [Pipeline] { (Tests) 00:11:18.777 [Pipeline] sh 00:11:19.052 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:11:19.065 [Pipeline] sh 00:11:19.339 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:11:19.354 [Pipeline] timeout 00:11:19.354 Timeout set to expire in 40 min 00:11:19.356 [Pipeline] { 00:11:19.371 [Pipeline] sh 00:11:19.696 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:11:20.262 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:11:20.274 [Pipeline] sh 00:11:20.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:11:20.564 [Pipeline] sh 00:11:20.842 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:11:20.858 [Pipeline] sh 00:11:21.134 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:11:21.134 ++ readlink -f spdk_repo 00:11:21.134 + DIR_ROOT=/home/vagrant/spdk_repo 00:11:21.134 + [[ -n /home/vagrant/spdk_repo ]] 00:11:21.134 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:11:21.134 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:11:21.134 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:11:21.134 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:11:21.134 + [[ -d /home/vagrant/spdk_repo/output ]] 00:11:21.134 + [[ nvme-vg-autotest == pkgdep-* ]] 00:11:21.134 + cd /home/vagrant/spdk_repo 00:11:21.134 + source /etc/os-release 00:11:21.134 ++ NAME='Fedora Linux' 00:11:21.134 ++ VERSION='38 (Cloud Edition)' 00:11:21.134 ++ ID=fedora 00:11:21.134 ++ VERSION_ID=38 00:11:21.134 ++ VERSION_CODENAME= 00:11:21.134 ++ PLATFORM_ID=platform:f38 00:11:21.134 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:11:21.134 ++ ANSI_COLOR='0;38;2;60;110;180' 00:11:21.134 ++ LOGO=fedora-logo-icon 00:11:21.134 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:11:21.134 ++ HOME_URL=https://fedoraproject.org/ 00:11:21.134 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:11:21.134 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:11:21.134 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:11:21.134 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:11:21.134 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:11:21.134 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:11:21.134 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:11:21.134 ++ SUPPORT_END=2024-05-14 00:11:21.134 ++ VARIANT='Cloud Edition' 00:11:21.134 ++ VARIANT_ID=cloud 00:11:21.134 + uname -a 00:11:21.134 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:11:21.134 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:21.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:21.958 Hugepages 00:11:21.958 node hugesize free / total 00:11:21.958 node0 1048576kB 0 / 0 00:11:21.958 node0 2048kB 0 / 0 00:11:21.958 00:11:21.958 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:21.958 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:21.958 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:21.958 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:21.958 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:11:21.958 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:11:21.958 + rm -f /tmp/spdk-ld-path 00:11:21.958 + source autorun-spdk.conf 00:11:21.958 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:21.958 ++ SPDK_TEST_NVME=1 00:11:21.958 ++ SPDK_TEST_FTL=1 00:11:21.958 ++ SPDK_TEST_ISAL=1 00:11:21.958 ++ SPDK_RUN_ASAN=1 00:11:21.958 ++ SPDK_RUN_UBSAN=1 00:11:21.958 ++ SPDK_TEST_XNVME=1 00:11:21.958 ++ SPDK_TEST_NVME_FDP=1 00:11:21.958 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:21.958 ++ RUN_NIGHTLY=0 00:11:21.958 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:11:21.958 + [[ -n '' ]] 00:11:21.958 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:11:21.958 + for M in /var/spdk/build-*-manifest.txt 00:11:21.958 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:11:21.958 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:21.958 + for M in /var/spdk/build-*-manifest.txt 00:11:21.958 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:11:21.958 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:21.958 ++ uname 00:11:21.958 + [[ Linux == \L\i\n\u\x ]] 00:11:21.958 + sudo dmesg -T 00:11:22.216 + sudo dmesg --clear 00:11:22.216 + dmesg_pid=5206 00:11:22.216 + [[ Fedora Linux == FreeBSD ]] 00:11:22.216 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:22.216 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:22.216 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:11:22.216 + sudo dmesg -Tw 00:11:22.216 + [[ -x /usr/src/fio-static/fio ]] 00:11:22.216 + export FIO_BIN=/usr/src/fio-static/fio 00:11:22.216 + FIO_BIN=/usr/src/fio-static/fio 00:11:22.216 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:11:22.216 + [[ ! -v VFIO_QEMU_BIN ]] 00:11:22.216 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:11:22.216 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:22.216 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:22.216 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:11:22.216 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:22.216 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:22.216 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:11:22.216 Test configuration: 00:11:22.216 SPDK_RUN_FUNCTIONAL_TEST=1 00:11:22.216 SPDK_TEST_NVME=1 00:11:22.216 SPDK_TEST_FTL=1 00:11:22.216 SPDK_TEST_ISAL=1 00:11:22.216 SPDK_RUN_ASAN=1 00:11:22.216 SPDK_RUN_UBSAN=1 00:11:22.216 SPDK_TEST_XNVME=1 00:11:22.216 SPDK_TEST_NVME_FDP=1 00:11:22.216 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:22.216 RUN_NIGHTLY=0 11:40:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.216 11:40:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:22.216 11:40:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.216 11:40:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.216 11:40:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.217 11:40:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.217 11:40:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.217 11:40:19 -- paths/export.sh@5 -- $ export PATH 00:11:22.217 11:40:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.217 11:40:19 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:11:22.217 11:40:19 -- common/autobuild_common.sh@447 -- $ date +%s 00:11:22.217 11:40:19 -- 
common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721907619.XXXXXX 00:11:22.217 11:40:19 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721907619.KyxIDw 00:11:22.217 11:40:19 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:11:22.217 11:40:19 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:11:22.217 11:40:19 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:11:22.217 11:40:19 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:11:22.217 11:40:19 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:11:22.217 11:40:19 -- common/autobuild_common.sh@463 -- $ get_config_params 00:11:22.217 11:40:19 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:11:22.217 11:40:19 -- common/autotest_common.sh@10 -- $ set +x 00:11:22.217 11:40:19 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:11:22.217 11:40:19 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:11:22.217 11:40:19 -- pm/common@17 -- $ local monitor 00:11:22.217 11:40:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.217 11:40:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.217 11:40:19 -- pm/common@25 -- $ sleep 1 00:11:22.217 11:40:19 -- pm/common@21 -- $ date +%s 00:11:22.217 11:40:19 -- pm/common@21 -- $ date +%s 00:11:22.217 11:40:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721907619 00:11:22.217 11:40:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721907619 00:11:22.217 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721907619_collect-vmstat.pm.log 00:11:22.217 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721907619_collect-cpu-load.pm.log 00:11:23.150 11:40:20 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:11:23.150 11:40:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:11:23.150 11:40:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:11:23.150 11:40:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:11:23.150 11:40:20 -- spdk/autobuild.sh@16 -- $ date -u 00:11:23.150 Thu Jul 25 11:40:20 AM UTC 2024 00:11:23.150 11:40:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:11:23.150 v24.09-pre-321-g704257090 00:11:23.150 11:40:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:11:23.150 11:40:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:11:23.150 11:40:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:11:23.150 11:40:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:11:23.150 11:40:20 -- common/autotest_common.sh@10 -- $ set +x 00:11:23.150 ************************************ 00:11:23.150 START TEST asan 00:11:23.150 ************************************ 00:11:23.150 using asan 00:11:23.150 11:40:20 asan -- common/autotest_common.sh@1125 -- 
$ echo 'using asan' 00:11:23.150 00:11:23.150 real 0m0.000s 00:11:23.150 user 0m0.000s 00:11:23.150 sys 0m0.000s 00:11:23.408 11:40:20 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:11:23.408 11:40:20 asan -- common/autotest_common.sh@10 -- $ set +x 00:11:23.408 ************************************ 00:11:23.408 END TEST asan 00:11:23.408 ************************************ 00:11:23.408 11:40:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:11:23.408 11:40:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:11:23.408 11:40:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:11:23.408 11:40:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:11:23.408 11:40:20 -- common/autotest_common.sh@10 -- $ set +x 00:11:23.408 ************************************ 00:11:23.408 START TEST ubsan 00:11:23.408 ************************************ 00:11:23.408 using ubsan 00:11:23.408 11:40:20 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:11:23.408 00:11:23.408 real 0m0.000s 00:11:23.408 user 0m0.000s 00:11:23.408 sys 0m0.000s 00:11:23.408 11:40:20 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:11:23.408 11:40:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:11:23.408 ************************************ 00:11:23.408 END TEST ubsan 00:11:23.408 ************************************ 00:11:23.408 11:40:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:11:23.408 11:40:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:11:23.408 11:40:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:11:23.408 11:40:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:11:23.408 11:40:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:11:23.408 11:40:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:11:23.408 11:40:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:11:23.408 11:40:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:11:23.408 11:40:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:11:23.408 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:23.408 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:23.974 Using 'verbs' RDMA provider 00:11:37.102 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:11:49.302 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:11:49.302 Creating mk/config.mk...done. 00:11:49.302 Creating mk/cc.flags.mk...done. 00:11:49.302 Type 'make' to build. 
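Editor's note: the configure invocation captured above is what the remainder of this build runs against. A minimal sketch of performing the same configure-and-build step outside the CI VM, with all flags copied verbatim from the log; the --with-fio=/usr/src/fio path is specific to this CI image and should be adjusted or dropped locally:

  # Same SPDK configure/build step as the pipeline, run by hand.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme \
      --with-shared
  make -j10   # -j10 matches the SPDK_VAGRANT_VMCPU=10 VM provisioned earlier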
00:11:49.302 11:40:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:11:49.302 11:40:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:11:49.302 11:40:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:11:49.302 11:40:45 -- common/autotest_common.sh@10 -- $ set +x 00:11:49.302 ************************************ 00:11:49.302 START TEST make 00:11:49.302 ************************************ 00:11:49.302 11:40:45 make -- common/autotest_common.sh@1125 -- $ make -j10 00:11:49.302 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:11:49.302 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:11:49.302 meson setup builddir \ 00:11:49.302 -Dwith-libaio=enabled \ 00:11:49.302 -Dwith-liburing=enabled \ 00:11:49.302 -Dwith-libvfn=disabled \ 00:11:49.302 -Dwith-spdk=false && \ 00:11:49.302 meson compile -C builddir && \ 00:11:49.302 cd -) 00:11:49.302 make[1]: Nothing to be done for 'all'. 00:11:52.583 The Meson build system 00:11:52.583 Version: 1.3.1 00:11:52.583 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:11:52.583 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:11:52.583 Build type: native build 00:11:52.583 Project name: xnvme 00:11:52.583 Project version: 0.7.3 00:11:52.583 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:11:52.583 C linker for the host machine: cc ld.bfd 2.39-16 00:11:52.583 Host machine cpu family: x86_64 00:11:52.583 Host machine cpu: x86_64 00:11:52.583 Message: host_machine.system: linux 00:11:52.583 Compiler for C supports arguments -Wno-missing-braces: YES 00:11:52.583 Compiler for C supports arguments -Wno-cast-function-type: YES 00:11:52.583 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:11:52.583 Run-time dependency threads found: YES 00:11:52.583 Has header "setupapi.h" : NO 00:11:52.583 Has header "linux/blkzoned.h" : YES 00:11:52.583 Has header "linux/blkzoned.h" : YES (cached) 00:11:52.583 Has header "libaio.h" : YES 00:11:52.583 Library aio found: YES 00:11:52.583 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:11:52.583 Run-time dependency liburing found: YES 2.2 00:11:52.583 Dependency libvfn skipped: feature with-libvfn disabled 00:11:52.583 Run-time dependency appleframeworks found: NO (tried framework) 00:11:52.583 Run-time dependency appleframeworks found: NO (tried framework) 00:11:52.583 Configuring xnvme_config.h using configuration 00:11:52.583 Configuring xnvme.spec using configuration 00:11:52.583 Run-time dependency bash-completion found: YES 2.11 00:11:52.583 Message: Bash-completions: /usr/share/bash-completion/completions 00:11:52.583 Program cp found: YES (/usr/bin/cp) 00:11:52.583 Has header "winsock2.h" : NO 00:11:52.583 Has header "dbghelp.h" : NO 00:11:52.583 Library rpcrt4 found: NO 00:11:52.583 Library rt found: YES 00:11:52.583 Checking for function "clock_gettime" with dependency -lrt: YES 00:11:52.583 Found CMake: /usr/bin/cmake (3.27.7) 00:11:52.583 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:11:52.583 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:11:52.583 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:11:52.583 Build targets in project: 32 00:11:52.583 00:11:52.583 xnvme 0.7.3 00:11:52.583 00:11:52.583 User defined options 00:11:52.583 with-libaio : enabled 00:11:52.583 with-liburing: enabled 00:11:52.583 with-libvfn : disabled 00:11:52.583 with-spdk : false 00:11:52.583 00:11:52.583 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:52.841 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:11:52.841 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:11:53.098 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:11:53.098 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:11:53.098 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:11:53.098 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:11:53.098 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:11:53.098 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:11:53.099 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:11:53.099 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:11:53.099 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:11:53.099 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:11:53.099 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:11:53.099 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:11:53.099 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:11:53.099 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:11:53.099 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:11:53.099 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:11:53.099 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:11:53.356 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:11:53.356 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:11:53.356 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:11:53.356 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:11:53.356 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:11:53.356 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:11:53.356 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:11:53.356 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:11:53.356 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:11:53.356 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:11:53.356 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:11:53.356 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:11:53.356 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:11:53.356 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:11:53.356 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:11:53.356 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:11:53.356 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:11:53.356 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:11:53.356 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:11:53.356 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:11:53.356 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:11:53.356 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:11:53.356 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 
00:11:53.356 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:11:53.356 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:11:53.356 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:11:53.356 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:11:53.614 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:11:53.614 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:11:53.614 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:11:53.614 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:11:53.614 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:11:53.614 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:11:53.614 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:11:53.614 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:11:53.614 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:11:53.614 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:11:53.614 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:11:53.614 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:11:53.614 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:11:53.614 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:11:53.614 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:11:53.614 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:11:53.872 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:11:53.872 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:11:53.872 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:11:53.872 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:11:53.872 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:11:53.872 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:11:53.872 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:11:53.872 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:11:53.872 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:11:53.872 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:11:53.872 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:11:54.129 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:11:54.129 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:11:54.129 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:11:54.129 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:11:54.129 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:11:54.129 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:11:54.129 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:11:54.129 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:11:54.129 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:11:54.129 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:11:54.129 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:11:54.129 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:11:54.129 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:11:54.129 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:11:54.386 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:11:54.386 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:11:54.386 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:11:54.386 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:11:54.386 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:11:54.386 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:11:54.386 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:11:54.387 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:11:54.387 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:11:54.387 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:11:54.387 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:11:54.387 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:11:54.387 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:11:54.387 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:11:54.387 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:11:54.387 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:11:54.387 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:11:54.387 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:11:54.387 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:11:54.387 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:11:54.387 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:11:54.387 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:11:54.387 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:11:54.387 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:11:54.645 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:11:54.645 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:11:54.645 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:11:54.645 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:11:54.645 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:11:54.645 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:11:54.645 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:11:54.645 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:11:54.645 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:11:54.645 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:11:54.645 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:11:54.645 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:11:54.645 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:11:54.645 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:11:54.645 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:11:54.645 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:11:54.645 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:11:54.645 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:11:54.645 [129/203] Compiling C object 
lib/libxnvme.so.p/xnvme_spec.c.o 00:11:54.645 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:11:54.645 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:11:54.645 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:11:54.645 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:11:54.645 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:11:54.902 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:11:54.902 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:11:54.902 [137/203] Linking target lib/libxnvme.so 00:11:54.902 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:11:54.902 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:11:54.902 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:11:54.902 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:11:54.902 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:11:54.902 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:11:54.902 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:11:54.902 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:11:55.159 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:11:55.159 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:11:55.159 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:11:55.159 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:11:55.159 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:11:55.159 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:11:55.159 [152/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:11:55.159 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:11:55.159 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:11:55.159 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:11:55.159 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:11:55.419 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:11:55.419 [158/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:11:55.419 [159/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:11:55.419 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:11:55.419 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:11:55.419 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:11:55.419 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:11:55.419 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:11:55.419 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:11:55.419 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:11:55.419 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:11:55.692 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:11:55.692 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:11:55.692 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:11:55.692 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:11:55.692 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:11:55.949 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:11:55.949 [174/203] Linking static target lib/libxnvme.a 00:11:55.949 [175/203] Linking target 
tests/xnvme_tests_async_intf 00:11:55.949 [176/203] Linking target tests/xnvme_tests_lblk 00:11:55.949 [177/203] Linking target tests/xnvme_tests_cli 00:11:55.949 [178/203] Linking target tests/xnvme_tests_znd_state 00:11:55.949 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:11:55.949 [180/203] Linking target tests/xnvme_tests_znd_append 00:11:55.949 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:11:55.949 [182/203] Linking target tests/xnvme_tests_ioworker 00:11:55.949 [183/203] Linking target tests/xnvme_tests_enum 00:11:55.949 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:11:55.949 [185/203] Linking target tests/xnvme_tests_buf 00:11:55.949 [186/203] Linking target tests/xnvme_tests_scc 00:11:55.949 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:11:55.949 [188/203] Linking target tests/xnvme_tests_map 00:11:55.949 [189/203] Linking target tools/xdd 00:11:55.949 [190/203] Linking target tools/lblk 00:11:56.205 [191/203] Linking target tools/xnvme 00:11:56.205 [192/203] Linking target tests/xnvme_tests_kvs 00:11:56.205 [193/203] Linking target examples/xnvme_dev 00:11:56.205 [194/203] Linking target examples/xnvme_enum 00:11:56.205 [195/203] Linking target tools/zoned 00:11:56.205 [196/203] Linking target examples/xnvme_hello 00:11:56.205 [197/203] Linking target tools/xnvme_file 00:11:56.205 [198/203] Linking target examples/xnvme_single_async 00:11:56.205 [199/203] Linking target examples/zoned_io_async 00:11:56.205 [200/203] Linking target tools/kvs 00:11:56.205 [201/203] Linking target examples/zoned_io_sync 00:11:56.205 [202/203] Linking target examples/xnvme_single_sync 00:11:56.205 [203/203] Linking target examples/xnvme_io_async 00:11:56.205 INFO: autodetecting backend as ninja 00:11:56.205 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:11:56.205 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:12:06.168 The Meson build system 00:12:06.169 Version: 1.3.1 00:12:06.169 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:12:06.169 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:06.169 Build type: native build 00:12:06.169 Program cat found: YES (/usr/bin/cat) 00:12:06.169 Project name: DPDK 00:12:06.169 Project version: 24.03.0 00:12:06.169 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:12:06.169 C linker for the host machine: cc ld.bfd 2.39-16 00:12:06.169 Host machine cpu family: x86_64 00:12:06.169 Host machine cpu: x86_64 00:12:06.169 Message: ## Building in Developer Mode ## 00:12:06.169 Program pkg-config found: YES (/usr/bin/pkg-config) 00:12:06.169 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:06.169 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:06.169 Program python3 found: YES (/usr/bin/python3) 00:12:06.169 Program cat found: YES (/usr/bin/cat) 00:12:06.169 Compiler for C supports arguments -march=native: YES 00:12:06.169 Checking for size of "void *" : 8 00:12:06.169 Checking for size of "void *" : 8 (cached) 00:12:06.169 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:12:06.169 Library m found: YES 00:12:06.169 Library numa found: YES 00:12:06.169 Has header "numaif.h" : YES 00:12:06.169 Library fdt found: NO 00:12:06.169 Library execinfo found: NO 00:12:06.169 Has header "execinfo.h" : YES 00:12:06.169 Found pkg-config: YES (/usr/bin/pkg-config) 
1.8.0 00:12:06.169 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:06.169 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:06.169 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:06.169 Run-time dependency openssl found: YES 3.0.9 00:12:06.169 Run-time dependency libpcap found: YES 1.10.4 00:12:06.169 Has header "pcap.h" with dependency libpcap: YES 00:12:06.169 Compiler for C supports arguments -Wcast-qual: YES 00:12:06.169 Compiler for C supports arguments -Wdeprecated: YES 00:12:06.169 Compiler for C supports arguments -Wformat: YES 00:12:06.169 Compiler for C supports arguments -Wformat-nonliteral: NO 00:12:06.169 Compiler for C supports arguments -Wformat-security: NO 00:12:06.169 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:06.169 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:06.169 Compiler for C supports arguments -Wnested-externs: YES 00:12:06.169 Compiler for C supports arguments -Wold-style-definition: YES 00:12:06.169 Compiler for C supports arguments -Wpointer-arith: YES 00:12:06.169 Compiler for C supports arguments -Wsign-compare: YES 00:12:06.169 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:06.169 Compiler for C supports arguments -Wundef: YES 00:12:06.169 Compiler for C supports arguments -Wwrite-strings: YES 00:12:06.169 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:06.169 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:12:06.169 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:06.169 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:12:06.169 Program objdump found: YES (/usr/bin/objdump) 00:12:06.169 Compiler for C supports arguments -mavx512f: YES 00:12:06.169 Checking if "AVX512 checking" compiles: YES 00:12:06.169 Fetching value of define "__SSE4_2__" : 1 00:12:06.169 Fetching value of define "__AES__" : 1 00:12:06.169 Fetching value of define "__AVX__" : 1 00:12:06.169 Fetching value of define "__AVX2__" : 1 00:12:06.169 Fetching value of define "__AVX512BW__" : (undefined) 00:12:06.169 Fetching value of define "__AVX512CD__" : (undefined) 00:12:06.169 Fetching value of define "__AVX512DQ__" : (undefined) 00:12:06.169 Fetching value of define "__AVX512F__" : (undefined) 00:12:06.169 Fetching value of define "__AVX512VL__" : (undefined) 00:12:06.169 Fetching value of define "__PCLMUL__" : 1 00:12:06.169 Fetching value of define "__RDRND__" : 1 00:12:06.169 Fetching value of define "__RDSEED__" : 1 00:12:06.169 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:06.169 Fetching value of define "__znver1__" : (undefined) 00:12:06.169 Fetching value of define "__znver2__" : (undefined) 00:12:06.169 Fetching value of define "__znver3__" : (undefined) 00:12:06.169 Fetching value of define "__znver4__" : (undefined) 00:12:06.169 Library asan found: YES 00:12:06.169 Compiler for C supports arguments -Wno-format-truncation: YES 00:12:06.169 Message: lib/log: Defining dependency "log" 00:12:06.169 Message: lib/kvargs: Defining dependency "kvargs" 00:12:06.169 Message: lib/telemetry: Defining dependency "telemetry" 00:12:06.169 Library rt found: YES 00:12:06.169 Checking for function "getentropy" : NO 00:12:06.169 Message: lib/eal: Defining dependency "eal" 00:12:06.169 Message: lib/ring: Defining dependency "ring" 00:12:06.169 Message: lib/rcu: Defining dependency "rcu" 00:12:06.169 Message: lib/mempool: Defining dependency "mempool" 00:12:06.169 Message: lib/mbuf: Defining 
dependency "mbuf" 00:12:06.169 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:06.169 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:12:06.169 Compiler for C supports arguments -mpclmul: YES 00:12:06.169 Compiler for C supports arguments -maes: YES 00:12:06.169 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:06.169 Compiler for C supports arguments -mavx512bw: YES 00:12:06.169 Compiler for C supports arguments -mavx512dq: YES 00:12:06.169 Compiler for C supports arguments -mavx512vl: YES 00:12:06.169 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:06.169 Compiler for C supports arguments -mavx2: YES 00:12:06.169 Compiler for C supports arguments -mavx: YES 00:12:06.169 Message: lib/net: Defining dependency "net" 00:12:06.169 Message: lib/meter: Defining dependency "meter" 00:12:06.169 Message: lib/ethdev: Defining dependency "ethdev" 00:12:06.169 Message: lib/pci: Defining dependency "pci" 00:12:06.169 Message: lib/cmdline: Defining dependency "cmdline" 00:12:06.169 Message: lib/hash: Defining dependency "hash" 00:12:06.169 Message: lib/timer: Defining dependency "timer" 00:12:06.169 Message: lib/compressdev: Defining dependency "compressdev" 00:12:06.169 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:06.169 Message: lib/dmadev: Defining dependency "dmadev" 00:12:06.169 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:06.169 Message: lib/power: Defining dependency "power" 00:12:06.169 Message: lib/reorder: Defining dependency "reorder" 00:12:06.169 Message: lib/security: Defining dependency "security" 00:12:06.169 Has header "linux/userfaultfd.h" : YES 00:12:06.169 Has header "linux/vduse.h" : YES 00:12:06.169 Message: lib/vhost: Defining dependency "vhost" 00:12:06.169 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:12:06.169 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:12:06.169 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:06.169 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:06.169 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:06.169 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:06.169 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:06.169 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:06.169 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:06.169 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:06.169 Program doxygen found: YES (/usr/bin/doxygen) 00:12:06.169 Configuring doxy-api-html.conf using configuration 00:12:06.169 Configuring doxy-api-man.conf using configuration 00:12:06.169 Program mandb found: YES (/usr/bin/mandb) 00:12:06.169 Program sphinx-build found: NO 00:12:06.169 Configuring rte_build_config.h using configuration 00:12:06.169 Message: 00:12:06.169 ================= 00:12:06.169 Applications Enabled 00:12:06.169 ================= 00:12:06.169 00:12:06.169 apps: 00:12:06.169 00:12:06.169 00:12:06.169 Message: 00:12:06.169 ================= 00:12:06.169 Libraries Enabled 00:12:06.169 ================= 00:12:06.169 00:12:06.169 libs: 00:12:06.169 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:06.170 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:06.170 cryptodev, dmadev, power, reorder, security, vhost, 00:12:06.170 00:12:06.170 Message: 00:12:06.170 =============== 00:12:06.170 Drivers Enabled 
00:12:06.170 =============== 00:12:06.170 00:12:06.170 common: 00:12:06.170 00:12:06.170 bus: 00:12:06.170 pci, vdev, 00:12:06.170 mempool: 00:12:06.170 ring, 00:12:06.170 dma: 00:12:06.170 00:12:06.170 net: 00:12:06.170 00:12:06.170 crypto: 00:12:06.170 00:12:06.170 compress: 00:12:06.170 00:12:06.170 vdpa: 00:12:06.170 00:12:06.170 00:12:06.170 Message: 00:12:06.170 ================= 00:12:06.170 Content Skipped 00:12:06.170 ================= 00:12:06.170 00:12:06.170 apps: 00:12:06.170 dumpcap: explicitly disabled via build config 00:12:06.170 graph: explicitly disabled via build config 00:12:06.170 pdump: explicitly disabled via build config 00:12:06.170 proc-info: explicitly disabled via build config 00:12:06.170 test-acl: explicitly disabled via build config 00:12:06.170 test-bbdev: explicitly disabled via build config 00:12:06.170 test-cmdline: explicitly disabled via build config 00:12:06.170 test-compress-perf: explicitly disabled via build config 00:12:06.170 test-crypto-perf: explicitly disabled via build config 00:12:06.170 test-dma-perf: explicitly disabled via build config 00:12:06.170 test-eventdev: explicitly disabled via build config 00:12:06.170 test-fib: explicitly disabled via build config 00:12:06.170 test-flow-perf: explicitly disabled via build config 00:12:06.170 test-gpudev: explicitly disabled via build config 00:12:06.170 test-mldev: explicitly disabled via build config 00:12:06.170 test-pipeline: explicitly disabled via build config 00:12:06.170 test-pmd: explicitly disabled via build config 00:12:06.170 test-regex: explicitly disabled via build config 00:12:06.170 test-sad: explicitly disabled via build config 00:12:06.170 test-security-perf: explicitly disabled via build config 00:12:06.170 00:12:06.170 libs: 00:12:06.170 argparse: explicitly disabled via build config 00:12:06.170 metrics: explicitly disabled via build config 00:12:06.170 acl: explicitly disabled via build config 00:12:06.170 bbdev: explicitly disabled via build config 00:12:06.170 bitratestats: explicitly disabled via build config 00:12:06.170 bpf: explicitly disabled via build config 00:12:06.170 cfgfile: explicitly disabled via build config 00:12:06.170 distributor: explicitly disabled via build config 00:12:06.170 efd: explicitly disabled via build config 00:12:06.170 eventdev: explicitly disabled via build config 00:12:06.170 dispatcher: explicitly disabled via build config 00:12:06.170 gpudev: explicitly disabled via build config 00:12:06.170 gro: explicitly disabled via build config 00:12:06.170 gso: explicitly disabled via build config 00:12:06.170 ip_frag: explicitly disabled via build config 00:12:06.170 jobstats: explicitly disabled via build config 00:12:06.170 latencystats: explicitly disabled via build config 00:12:06.170 lpm: explicitly disabled via build config 00:12:06.170 member: explicitly disabled via build config 00:12:06.170 pcapng: explicitly disabled via build config 00:12:06.170 rawdev: explicitly disabled via build config 00:12:06.170 regexdev: explicitly disabled via build config 00:12:06.170 mldev: explicitly disabled via build config 00:12:06.170 rib: explicitly disabled via build config 00:12:06.170 sched: explicitly disabled via build config 00:12:06.170 stack: explicitly disabled via build config 00:12:06.170 ipsec: explicitly disabled via build config 00:12:06.170 pdcp: explicitly disabled via build config 00:12:06.170 fib: explicitly disabled via build config 00:12:06.170 port: explicitly disabled via build config 00:12:06.170 pdump: explicitly disabled via 
build config 00:12:06.170 table: explicitly disabled via build config 00:12:06.170 pipeline: explicitly disabled via build config 00:12:06.170 graph: explicitly disabled via build config 00:12:06.170 node: explicitly disabled via build config 00:12:06.170 00:12:06.170 drivers: 00:12:06.170 common/cpt: not in enabled drivers build config 00:12:06.170 common/dpaax: not in enabled drivers build config 00:12:06.170 common/iavf: not in enabled drivers build config 00:12:06.170 common/idpf: not in enabled drivers build config 00:12:06.170 common/ionic: not in enabled drivers build config 00:12:06.170 common/mvep: not in enabled drivers build config 00:12:06.170 common/octeontx: not in enabled drivers build config 00:12:06.170 bus/auxiliary: not in enabled drivers build config 00:12:06.170 bus/cdx: not in enabled drivers build config 00:12:06.170 bus/dpaa: not in enabled drivers build config 00:12:06.170 bus/fslmc: not in enabled drivers build config 00:12:06.170 bus/ifpga: not in enabled drivers build config 00:12:06.170 bus/platform: not in enabled drivers build config 00:12:06.170 bus/uacce: not in enabled drivers build config 00:12:06.170 bus/vmbus: not in enabled drivers build config 00:12:06.170 common/cnxk: not in enabled drivers build config 00:12:06.170 common/mlx5: not in enabled drivers build config 00:12:06.170 common/nfp: not in enabled drivers build config 00:12:06.170 common/nitrox: not in enabled drivers build config 00:12:06.170 common/qat: not in enabled drivers build config 00:12:06.170 common/sfc_efx: not in enabled drivers build config 00:12:06.170 mempool/bucket: not in enabled drivers build config 00:12:06.170 mempool/cnxk: not in enabled drivers build config 00:12:06.170 mempool/dpaa: not in enabled drivers build config 00:12:06.170 mempool/dpaa2: not in enabled drivers build config 00:12:06.170 mempool/octeontx: not in enabled drivers build config 00:12:06.170 mempool/stack: not in enabled drivers build config 00:12:06.170 dma/cnxk: not in enabled drivers build config 00:12:06.170 dma/dpaa: not in enabled drivers build config 00:12:06.170 dma/dpaa2: not in enabled drivers build config 00:12:06.170 dma/hisilicon: not in enabled drivers build config 00:12:06.170 dma/idxd: not in enabled drivers build config 00:12:06.170 dma/ioat: not in enabled drivers build config 00:12:06.170 dma/skeleton: not in enabled drivers build config 00:12:06.170 net/af_packet: not in enabled drivers build config 00:12:06.170 net/af_xdp: not in enabled drivers build config 00:12:06.170 net/ark: not in enabled drivers build config 00:12:06.170 net/atlantic: not in enabled drivers build config 00:12:06.170 net/avp: not in enabled drivers build config 00:12:06.170 net/axgbe: not in enabled drivers build config 00:12:06.170 net/bnx2x: not in enabled drivers build config 00:12:06.170 net/bnxt: not in enabled drivers build config 00:12:06.170 net/bonding: not in enabled drivers build config 00:12:06.170 net/cnxk: not in enabled drivers build config 00:12:06.170 net/cpfl: not in enabled drivers build config 00:12:06.170 net/cxgbe: not in enabled drivers build config 00:12:06.170 net/dpaa: not in enabled drivers build config 00:12:06.170 net/dpaa2: not in enabled drivers build config 00:12:06.170 net/e1000: not in enabled drivers build config 00:12:06.170 net/ena: not in enabled drivers build config 00:12:06.170 net/enetc: not in enabled drivers build config 00:12:06.170 net/enetfec: not in enabled drivers build config 00:12:06.170 net/enic: not in enabled drivers build config 00:12:06.170 net/failsafe: 
not in enabled drivers build config 00:12:06.170 net/fm10k: not in enabled drivers build config 00:12:06.170 net/gve: not in enabled drivers build config 00:12:06.170 net/hinic: not in enabled drivers build config 00:12:06.170 net/hns3: not in enabled drivers build config 00:12:06.170 net/i40e: not in enabled drivers build config 00:12:06.170 net/iavf: not in enabled drivers build config 00:12:06.170 net/ice: not in enabled drivers build config 00:12:06.170 net/idpf: not in enabled drivers build config 00:12:06.170 net/igc: not in enabled drivers build config 00:12:06.170 net/ionic: not in enabled drivers build config 00:12:06.170 net/ipn3ke: not in enabled drivers build config 00:12:06.170 net/ixgbe: not in enabled drivers build config 00:12:06.170 net/mana: not in enabled drivers build config 00:12:06.170 net/memif: not in enabled drivers build config 00:12:06.170 net/mlx4: not in enabled drivers build config 00:12:06.170 net/mlx5: not in enabled drivers build config 00:12:06.170 net/mvneta: not in enabled drivers build config 00:12:06.170 net/mvpp2: not in enabled drivers build config 00:12:06.170 net/netvsc: not in enabled drivers build config 00:12:06.170 net/nfb: not in enabled drivers build config 00:12:06.170 net/nfp: not in enabled drivers build config 00:12:06.170 net/ngbe: not in enabled drivers build config 00:12:06.170 net/null: not in enabled drivers build config 00:12:06.170 net/octeontx: not in enabled drivers build config 00:12:06.170 net/octeon_ep: not in enabled drivers build config 00:12:06.170 net/pcap: not in enabled drivers build config 00:12:06.170 net/pfe: not in enabled drivers build config 00:12:06.170 net/qede: not in enabled drivers build config 00:12:06.170 net/ring: not in enabled drivers build config 00:12:06.170 net/sfc: not in enabled drivers build config 00:12:06.170 net/softnic: not in enabled drivers build config 00:12:06.170 net/tap: not in enabled drivers build config 00:12:06.170 net/thunderx: not in enabled drivers build config 00:12:06.170 net/txgbe: not in enabled drivers build config 00:12:06.170 net/vdev_netvsc: not in enabled drivers build config 00:12:06.171 net/vhost: not in enabled drivers build config 00:12:06.171 net/virtio: not in enabled drivers build config 00:12:06.171 net/vmxnet3: not in enabled drivers build config 00:12:06.171 raw/*: missing internal dependency, "rawdev" 00:12:06.171 crypto/armv8: not in enabled drivers build config 00:12:06.171 crypto/bcmfs: not in enabled drivers build config 00:12:06.171 crypto/caam_jr: not in enabled drivers build config 00:12:06.171 crypto/ccp: not in enabled drivers build config 00:12:06.171 crypto/cnxk: not in enabled drivers build config 00:12:06.171 crypto/dpaa_sec: not in enabled drivers build config 00:12:06.171 crypto/dpaa2_sec: not in enabled drivers build config 00:12:06.171 crypto/ipsec_mb: not in enabled drivers build config 00:12:06.171 crypto/mlx5: not in enabled drivers build config 00:12:06.171 crypto/mvsam: not in enabled drivers build config 00:12:06.171 crypto/nitrox: not in enabled drivers build config 00:12:06.171 crypto/null: not in enabled drivers build config 00:12:06.171 crypto/octeontx: not in enabled drivers build config 00:12:06.171 crypto/openssl: not in enabled drivers build config 00:12:06.171 crypto/scheduler: not in enabled drivers build config 00:12:06.171 crypto/uadk: not in enabled drivers build config 00:12:06.171 crypto/virtio: not in enabled drivers build config 00:12:06.171 compress/isal: not in enabled drivers build config 00:12:06.171 compress/mlx5: not 
in enabled drivers build config 00:12:06.171 compress/nitrox: not in enabled drivers build config 00:12:06.171 compress/octeontx: not in enabled drivers build config 00:12:06.171 compress/zlib: not in enabled drivers build config 00:12:06.171 regex/*: missing internal dependency, "regexdev" 00:12:06.171 ml/*: missing internal dependency, "mldev" 00:12:06.171 vdpa/ifc: not in enabled drivers build config 00:12:06.171 vdpa/mlx5: not in enabled drivers build config 00:12:06.171 vdpa/nfp: not in enabled drivers build config 00:12:06.171 vdpa/sfc: not in enabled drivers build config 00:12:06.171 event/*: missing internal dependency, "eventdev" 00:12:06.171 baseband/*: missing internal dependency, "bbdev" 00:12:06.171 gpu/*: missing internal dependency, "gpudev" 00:12:06.171 00:12:06.171 00:12:06.171 Build targets in project: 85 00:12:06.171 00:12:06.171 DPDK 24.03.0 00:12:06.171 00:12:06.171 User defined options 00:12:06.171 buildtype : debug 00:12:06.171 default_library : shared 00:12:06.171 libdir : lib 00:12:06.171 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:06.171 b_sanitize : address 00:12:06.171 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:12:06.171 c_link_args : 00:12:06.171 cpu_instruction_set: native 00:12:06.171 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:06.171 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:06.171 enable_docs : false 00:12:06.171 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:06.171 enable_kmods : false 00:12:06.171 max_lcores : 128 00:12:06.171 tests : false 00:12:06.171 00:12:06.171 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:12:06.735 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:06.735 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:12:06.992 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:06.992 [3/268] Linking static target lib/librte_kvargs.a 00:12:06.992 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:06.992 [5/268] Linking static target lib/librte_log.a 00:12:06.992 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:07.558 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:07.558 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:07.558 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:07.558 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:07.816 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:08.073 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:08.073 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:08.073 [14/268] Linking target lib/librte_log.so.24.1 00:12:08.073 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:08.073 [16/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:08.073 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:08.073 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:08.330 [19/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:12:08.330 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:08.330 [21/268] Linking static target lib/librte_telemetry.a 00:12:08.330 [22/268] Linking target lib/librte_kvargs.so.24.1 00:12:08.588 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:12:08.845 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:08.845 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:09.103 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:09.103 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:09.103 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:09.103 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:09.103 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:09.361 [31/268] Linking target lib/librte_telemetry.so.24.1 00:12:09.361 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:09.361 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:09.361 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:12:09.618 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:09.618 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:09.618 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:09.876 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:09.876 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:10.135 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:10.135 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:10.135 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:10.135 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:10.135 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:10.392 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:10.392 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:10.650 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:10.650 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:10.908 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:12:11.166 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:11.166 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:11.166 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:11.166 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:11.423 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:11.681 [55/268] Compiling C 
object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:11.939 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:11.939 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:11.939 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:12.196 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:12.196 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:12.196 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:12.196 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:12.196 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:12:12.196 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:12.454 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:12.711 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:12.969 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:13.227 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:12:13.227 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:12:13.227 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:12:13.227 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:12:13.227 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:13.485 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:13.485 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:13.485 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:12:13.485 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:12:13.743 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:12:13.743 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:12:14.001 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:12:14.001 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:14.259 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:12:14.259 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:14.543 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:12:14.543 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:12:14.543 [85/268] Linking static target lib/librte_eal.a 00:12:14.801 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:14.801 [87/268] Linking static target lib/librte_ring.a 00:12:15.059 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:15.059 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:15.318 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:15.318 [91/268] Linking static target lib/librte_rcu.a 00:12:15.318 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:15.318 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:15.318 [94/268] Linking static target lib/librte_mempool.a 00:12:15.318 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:15.318 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:15.575 [97/268] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:15.833 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:15.833 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:15.833 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:16.091 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:16.091 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:16.348 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:16.348 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:16.348 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:16.606 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:16.606 [107/268] Linking static target lib/librte_mbuf.a 00:12:16.606 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:16.606 [109/268] Linking static target lib/librte_meter.a 00:12:16.606 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:16.864 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:16.864 [112/268] Linking static target lib/librte_net.a 00:12:16.864 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:17.121 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:17.121 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:17.121 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:17.378 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:17.378 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:17.637 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:17.637 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:17.895 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:18.154 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:18.412 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:18.412 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:18.412 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:18.669 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:18.669 [127/268] Linking static target lib/librte_pci.a 00:12:18.669 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:18.669 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:18.669 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:18.669 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:18.927 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:18.927 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:18.927 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:18.927 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:18.927 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:19.184 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:19.184 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:19.184 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:19.184 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:19.184 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:19.184 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:12:19.184 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:12:19.184 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:19.184 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:19.441 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:19.441 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:19.441 [148/268] Linking static target lib/librte_cmdline.a 00:12:19.699 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:19.957 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:12:19.957 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:19.957 [152/268] Linking static target lib/librte_timer.a 00:12:20.214 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:20.471 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:20.471 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:20.729 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:20.729 [157/268] Linking static target lib/librte_compressdev.a 00:12:20.729 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:20.729 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:20.729 [160/268] Linking static target lib/librte_hash.a 00:12:20.729 [161/268] Linking static target lib/librte_ethdev.a 00:12:20.729 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:20.729 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:20.995 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:12:20.995 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:21.257 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:21.257 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:21.525 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:12:21.525 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:21.525 [170/268] Linking static target lib/librte_dmadev.a 00:12:21.525 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:12:21.525 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:21.783 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:12:21.783 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:12:21.783 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:22.040 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 
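The bracketed [n/268] entries in this stretch are ninja compiling the 268 DPDK targets configured above. Meson reports the exact backend command a little later in this log ("calculating backend command to run"); re-running just this build step by hand would be, per that report:

    # Build command as meson reports it further below in this log;
    # with -j 10, up to ten compile jobs run concurrently.
    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10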
00:12:22.040 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:12:22.298 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:12:22.298 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:22.298 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:12:22.298 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:12:22.298 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:12:22.556 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:22.556 [184/268] Linking static target lib/librte_cryptodev.a 00:12:22.556 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:12:22.556 [186/268] Linking static target lib/librte_power.a 00:12:23.122 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:23.122 [188/268] Linking static target lib/librte_reorder.a 00:12:23.122 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:12:23.122 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:12:23.122 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:12:23.122 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:23.122 [193/268] Linking static target lib/librte_security.a 00:12:23.380 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:23.380 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:12:23.380 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:12:23.637 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:23.637 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:24.203 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:12:24.203 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:24.203 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:12:24.203 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:24.203 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:12:24.203 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:24.203 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:24.768 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:12:24.768 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:12:24.768 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:12:24.768 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:25.025 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:25.025 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:25.025 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:25.025 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:25.025 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:25.026 [215/268] Linking static target drivers/librte_bus_pci.a 00:12:25.285 [216/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:25.285 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:25.285 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:12:25.285 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:25.285 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:25.285 [221/268] Linking static target drivers/librte_bus_vdev.a 00:12:25.285 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:25.543 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:25.543 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:25.543 [225/268] Linking static target drivers/librte_mempool_ring.a 00:12:25.543 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:25.800 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:25.800 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:26.057 [229/268] Linking target lib/librte_eal.so.24.1 00:12:26.057 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:12:26.315 [231/268] Linking target lib/librte_ring.so.24.1 00:12:26.315 [232/268] Linking target lib/librte_pci.so.24.1 00:12:26.315 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:12:26.315 [234/268] Linking target lib/librte_timer.so.24.1 00:12:26.315 [235/268] Linking target lib/librte_dmadev.so.24.1 00:12:26.315 [236/268] Linking target lib/librte_meter.so.24.1 00:12:26.315 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:12:26.315 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:12:26.315 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:12:26.315 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:12:26.315 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:12:26.315 [242/268] Linking target lib/librte_rcu.so.24.1 00:12:26.572 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:12:26.572 [244/268] Linking target lib/librte_mempool.so.24.1 00:12:26.572 [245/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:12:26.572 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:12:26.572 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:12:26.572 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:12:26.572 [249/268] Linking target lib/librte_mbuf.so.24.1 00:12:26.829 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:12:26.829 [251/268] Linking target lib/librte_compressdev.so.24.1 00:12:26.829 [252/268] Linking target lib/librte_reorder.so.24.1 00:12:26.829 [253/268] Linking target lib/librte_net.so.24.1 00:12:26.829 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:12:27.087 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:12:27.087 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:12:27.087 [257/268] Linking 
target lib/librte_hash.so.24.1 00:12:27.087 [258/268] Linking target lib/librte_cmdline.so.24.1 00:12:27.087 [259/268] Linking target lib/librte_security.so.24.1 00:12:27.344 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:12:27.910 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:27.910 [262/268] Linking target lib/librte_ethdev.so.24.1 00:12:28.168 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:12:28.168 [264/268] Linking target lib/librte_power.so.24.1 00:12:30.727 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:12:30.727 [266/268] Linking static target lib/librte_vhost.a 00:12:32.102 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:32.360 [268/268] Linking target lib/librte_vhost.so.24.1 00:12:32.360 INFO: autodetecting backend as ninja 00:12:32.360 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:12:33.294 CC lib/ut_mock/mock.o 00:12:33.294 CC lib/ut/ut.o 00:12:33.294 CC lib/log/log.o 00:12:33.294 CC lib/log/log_flags.o 00:12:33.294 CC lib/log/log_deprecated.o 00:12:33.552 LIB libspdk_ut_mock.a 00:12:33.552 LIB libspdk_ut.a 00:12:33.552 LIB libspdk_log.a 00:12:33.552 SO libspdk_ut_mock.so.6.0 00:12:33.552 SO libspdk_ut.so.2.0 00:12:33.552 SO libspdk_log.so.7.0 00:12:33.836 SYMLINK libspdk_ut_mock.so 00:12:33.836 SYMLINK libspdk_ut.so 00:12:33.836 SYMLINK libspdk_log.so 00:12:33.836 CC lib/dma/dma.o 00:12:33.836 CXX lib/trace_parser/trace.o 00:12:33.836 CC lib/ioat/ioat.o 00:12:33.836 CC lib/util/base64.o 00:12:33.836 CC lib/util/bit_array.o 00:12:33.836 CC lib/util/cpuset.o 00:12:33.836 CC lib/util/crc16.o 00:12:33.836 CC lib/util/crc32.o 00:12:33.836 CC lib/util/crc32c.o 00:12:34.094 CC lib/vfio_user/host/vfio_user_pci.o 00:12:34.094 CC lib/vfio_user/host/vfio_user.o 00:12:34.094 CC lib/util/crc32_ieee.o 00:12:34.094 LIB libspdk_dma.a 00:12:34.094 CC lib/util/crc64.o 00:12:34.094 SO libspdk_dma.so.4.0 00:12:34.094 CC lib/util/dif.o 00:12:34.094 CC lib/util/fd.o 00:12:34.352 SYMLINK libspdk_dma.so 00:12:34.352 CC lib/util/fd_group.o 00:12:34.352 CC lib/util/file.o 00:12:34.352 CC lib/util/hexlify.o 00:12:34.352 CC lib/util/iov.o 00:12:34.352 LIB libspdk_ioat.a 00:12:34.352 SO libspdk_ioat.so.7.0 00:12:34.352 CC lib/util/math.o 00:12:34.352 CC lib/util/net.o 00:12:34.352 SYMLINK libspdk_ioat.so 00:12:34.352 LIB libspdk_vfio_user.a 00:12:34.352 CC lib/util/pipe.o 00:12:34.352 CC lib/util/strerror_tls.o 00:12:34.352 SO libspdk_vfio_user.so.5.0 00:12:34.609 CC lib/util/string.o 00:12:34.609 CC lib/util/uuid.o 00:12:34.609 SYMLINK libspdk_vfio_user.so 00:12:34.609 CC lib/util/xor.o 00:12:34.610 CC lib/util/zipf.o 00:12:34.867 LIB libspdk_util.a 00:12:35.125 SO libspdk_util.so.10.0 00:12:35.125 SYMLINK libspdk_util.so 00:12:35.125 LIB libspdk_trace_parser.a 00:12:35.383 SO libspdk_trace_parser.so.5.0 00:12:35.383 SYMLINK libspdk_trace_parser.so 00:12:35.383 CC lib/conf/conf.o 00:12:35.383 CC lib/env_dpdk/env.o 00:12:35.383 CC lib/env_dpdk/pci.o 00:12:35.383 CC lib/vmd/vmd.o 00:12:35.383 CC lib/env_dpdk/memory.o 00:12:35.384 CC lib/env_dpdk/init.o 00:12:35.384 CC lib/rdma_provider/common.o 00:12:35.384 CC lib/idxd/idxd.o 00:12:35.384 CC lib/rdma_utils/rdma_utils.o 00:12:35.384 CC lib/json/json_parse.o 00:12:35.642 CC lib/rdma_provider/rdma_provider_verbs.o 00:12:35.642 LIB libspdk_conf.a 00:12:35.901 
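From "CC lib/ut_mock/mock.o" above onward, the output switches from DPDK's ninja to SPDK's own quiet make output: CC/CXX lines for objects, LIB for static archives, SO and SYMLINK for shared libraries. A sketch of the equivalent manual step, assuming the repository path used throughout this log and a job count matching the DPDK phase (this section does not show SPDK's configure flags, so they are omitted here):

    # Sketch: SPDK's top-level build, which emits the CC/LIB/SO/SYMLINK
    # lines seen here. Assumes ./configure has already been run; the -j
    # value is an assumption borrowed from the DPDK step above.
    cd /home/vagrant/spdk_repo/spdk
    make -j10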
SO libspdk_conf.so.6.0 00:12:35.901 LIB libspdk_rdma_utils.a 00:12:35.901 CC lib/json/json_util.o 00:12:35.901 SO libspdk_rdma_utils.so.1.0 00:12:35.901 SYMLINK libspdk_conf.so 00:12:35.901 CC lib/idxd/idxd_user.o 00:12:35.901 CC lib/idxd/idxd_kernel.o 00:12:35.901 SYMLINK libspdk_rdma_utils.so 00:12:35.901 CC lib/json/json_write.o 00:12:35.901 LIB libspdk_rdma_provider.a 00:12:35.901 CC lib/vmd/led.o 00:12:35.901 SO libspdk_rdma_provider.so.6.0 00:12:36.159 SYMLINK libspdk_rdma_provider.so 00:12:36.159 CC lib/env_dpdk/threads.o 00:12:36.159 CC lib/env_dpdk/pci_ioat.o 00:12:36.159 CC lib/env_dpdk/pci_virtio.o 00:12:36.159 CC lib/env_dpdk/pci_vmd.o 00:12:36.159 CC lib/env_dpdk/pci_idxd.o 00:12:36.159 CC lib/env_dpdk/pci_event.o 00:12:36.159 CC lib/env_dpdk/sigbus_handler.o 00:12:36.159 CC lib/env_dpdk/pci_dpdk.o 00:12:36.159 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:36.159 LIB libspdk_json.a 00:12:36.417 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:36.417 LIB libspdk_vmd.a 00:12:36.417 LIB libspdk_idxd.a 00:12:36.417 SO libspdk_json.so.6.0 00:12:36.417 SO libspdk_vmd.so.6.0 00:12:36.417 SO libspdk_idxd.so.12.0 00:12:36.417 SYMLINK libspdk_json.so 00:12:36.417 SYMLINK libspdk_vmd.so 00:12:36.417 SYMLINK libspdk_idxd.so 00:12:36.675 CC lib/jsonrpc/jsonrpc_client.o 00:12:36.675 CC lib/jsonrpc/jsonrpc_server.o 00:12:36.675 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:36.675 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:36.933 LIB libspdk_jsonrpc.a 00:12:37.190 SO libspdk_jsonrpc.so.6.0 00:12:37.190 SYMLINK libspdk_jsonrpc.so 00:12:37.448 LIB libspdk_env_dpdk.a 00:12:37.448 CC lib/rpc/rpc.o 00:12:37.706 SO libspdk_env_dpdk.so.15.0 00:12:37.706 LIB libspdk_rpc.a 00:12:37.706 SYMLINK libspdk_env_dpdk.so 00:12:37.706 SO libspdk_rpc.so.6.0 00:12:37.964 SYMLINK libspdk_rpc.so 00:12:37.964 CC lib/notify/notify_rpc.o 00:12:37.964 CC lib/notify/notify.o 00:12:37.964 CC lib/keyring/keyring.o 00:12:37.964 CC lib/keyring/keyring_rpc.o 00:12:37.964 CC lib/trace/trace.o 00:12:38.221 CC lib/trace/trace_flags.o 00:12:38.221 CC lib/trace/trace_rpc.o 00:12:38.221 LIB libspdk_notify.a 00:12:38.221 SO libspdk_notify.so.6.0 00:12:38.478 LIB libspdk_keyring.a 00:12:38.478 SYMLINK libspdk_notify.so 00:12:38.478 SO libspdk_keyring.so.1.0 00:12:38.478 SYMLINK libspdk_keyring.so 00:12:38.478 LIB libspdk_trace.a 00:12:38.478 SO libspdk_trace.so.10.0 00:12:38.735 SYMLINK libspdk_trace.so 00:12:38.993 CC lib/sock/sock_rpc.o 00:12:38.993 CC lib/sock/sock.o 00:12:38.993 CC lib/thread/thread.o 00:12:38.993 CC lib/thread/iobuf.o 00:12:39.251 LIB libspdk_sock.a 00:12:39.509 SO libspdk_sock.so.10.0 00:12:39.509 SYMLINK libspdk_sock.so 00:12:39.767 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:39.767 CC lib/nvme/nvme_ctrlr.o 00:12:39.767 CC lib/nvme/nvme_fabric.o 00:12:39.767 CC lib/nvme/nvme_ns.o 00:12:39.767 CC lib/nvme/nvme_ns_cmd.o 00:12:39.767 CC lib/nvme/nvme_pcie_common.o 00:12:39.767 CC lib/nvme/nvme_pcie.o 00:12:39.767 CC lib/nvme/nvme_qpair.o 00:12:39.767 CC lib/nvme/nvme.o 00:12:40.700 CC lib/nvme/nvme_quirks.o 00:12:40.955 CC lib/nvme/nvme_transport.o 00:12:40.955 LIB libspdk_thread.a 00:12:40.955 CC lib/nvme/nvme_discovery.o 00:12:40.955 SO libspdk_thread.so.10.1 00:12:41.211 SYMLINK libspdk_thread.so 00:12:41.468 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:41.468 CC lib/accel/accel.o 00:12:41.468 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:41.725 CC lib/blob/blobstore.o 00:12:41.725 CC lib/init/json_config.o 00:12:41.725 CC lib/virtio/virtio.o 00:12:41.985 CC lib/virtio/virtio_vhost_user.o 00:12:42.248 CC lib/virtio/virtio_vfio_user.o 00:12:42.248 CC 
lib/virtio/virtio_pci.o 00:12:42.248 CC lib/blob/request.o 00:12:42.248 CC lib/nvme/nvme_tcp.o 00:12:42.248 CC lib/init/subsystem.o 00:12:42.506 CC lib/init/subsystem_rpc.o 00:12:42.506 CC lib/accel/accel_rpc.o 00:12:42.506 CC lib/accel/accel_sw.o 00:12:42.506 CC lib/nvme/nvme_opal.o 00:12:42.506 CC lib/init/rpc.o 00:12:42.764 LIB libspdk_virtio.a 00:12:42.764 SO libspdk_virtio.so.7.0 00:12:42.764 CC lib/blob/zeroes.o 00:12:42.764 SYMLINK libspdk_virtio.so 00:12:42.764 CC lib/blob/blob_bs_dev.o 00:12:42.764 CC lib/nvme/nvme_io_msg.o 00:12:42.764 LIB libspdk_init.a 00:12:43.021 SO libspdk_init.so.5.0 00:12:43.021 CC lib/nvme/nvme_poll_group.o 00:12:43.021 SYMLINK libspdk_init.so 00:12:43.021 CC lib/nvme/nvme_zns.o 00:12:43.278 CC lib/nvme/nvme_stubs.o 00:12:43.536 CC lib/event/app.o 00:12:43.794 CC lib/event/reactor.o 00:12:43.794 LIB libspdk_accel.a 00:12:43.794 SO libspdk_accel.so.16.0 00:12:43.794 CC lib/event/log_rpc.o 00:12:44.052 SYMLINK libspdk_accel.so 00:12:44.052 CC lib/event/app_rpc.o 00:12:44.052 CC lib/event/scheduler_static.o 00:12:44.052 CC lib/nvme/nvme_auth.o 00:12:44.309 CC lib/nvme/nvme_cuse.o 00:12:44.309 CC lib/nvme/nvme_rdma.o 00:12:44.309 CC lib/bdev/bdev.o 00:12:44.309 CC lib/bdev/bdev_rpc.o 00:12:44.309 CC lib/bdev/bdev_zone.o 00:12:44.309 CC lib/bdev/part.o 00:12:44.567 LIB libspdk_event.a 00:12:44.567 SO libspdk_event.so.14.0 00:12:44.824 SYMLINK libspdk_event.so 00:12:44.824 CC lib/bdev/scsi_nvme.o 00:12:46.749 LIB libspdk_nvme.a 00:12:46.749 SO libspdk_nvme.so.13.1 00:12:47.006 SYMLINK libspdk_nvme.so 00:12:47.264 LIB libspdk_blob.a 00:12:47.264 SO libspdk_blob.so.11.0 00:12:47.264 SYMLINK libspdk_blob.so 00:12:47.521 CC lib/lvol/lvol.o 00:12:47.521 CC lib/blobfs/blobfs.o 00:12:47.521 CC lib/blobfs/tree.o 00:12:48.457 LIB libspdk_bdev.a 00:12:48.457 SO libspdk_bdev.so.16.0 00:12:48.457 SYMLINK libspdk_bdev.so 00:12:48.715 CC lib/nvmf/ctrlr.o 00:12:48.715 CC lib/nvmf/ctrlr_bdev.o 00:12:48.715 CC lib/nvmf/ctrlr_discovery.o 00:12:48.715 CC lib/nvmf/subsystem.o 00:12:48.715 CC lib/scsi/dev.o 00:12:48.715 CC lib/ftl/ftl_core.o 00:12:48.715 CC lib/ublk/ublk.o 00:12:48.715 CC lib/nbd/nbd.o 00:12:48.973 LIB libspdk_lvol.a 00:12:48.973 SO libspdk_lvol.so.10.0 00:12:48.973 LIB libspdk_blobfs.a 00:12:48.973 SO libspdk_blobfs.so.10.0 00:12:48.973 SYMLINK libspdk_lvol.so 00:12:48.973 CC lib/nbd/nbd_rpc.o 00:12:49.231 SYMLINK libspdk_blobfs.so 00:12:49.231 CC lib/nvmf/nvmf.o 00:12:49.231 CC lib/scsi/lun.o 00:12:49.231 CC lib/ftl/ftl_init.o 00:12:49.231 CC lib/ftl/ftl_layout.o 00:12:49.489 LIB libspdk_nbd.a 00:12:49.489 CC lib/nvmf/nvmf_rpc.o 00:12:49.489 CC lib/scsi/port.o 00:12:49.489 SO libspdk_nbd.so.7.0 00:12:49.489 CC lib/ublk/ublk_rpc.o 00:12:49.746 SYMLINK libspdk_nbd.so 00:12:49.746 CC lib/scsi/scsi.o 00:12:49.746 CC lib/scsi/scsi_bdev.o 00:12:49.746 CC lib/scsi/scsi_pr.o 00:12:49.746 CC lib/ftl/ftl_debug.o 00:12:49.746 LIB libspdk_ublk.a 00:12:49.746 SO libspdk_ublk.so.3.0 00:12:50.004 CC lib/ftl/ftl_io.o 00:12:50.005 SYMLINK libspdk_ublk.so 00:12:50.005 CC lib/nvmf/transport.o 00:12:50.005 CC lib/nvmf/tcp.o 00:12:50.262 CC lib/nvmf/stubs.o 00:12:50.262 CC lib/nvmf/mdns_server.o 00:12:50.262 CC lib/scsi/scsi_rpc.o 00:12:50.520 CC lib/ftl/ftl_sb.o 00:12:50.520 CC lib/ftl/ftl_l2p.o 00:12:50.520 CC lib/scsi/task.o 00:12:50.778 CC lib/nvmf/rdma.o 00:12:50.778 CC lib/ftl/ftl_l2p_flat.o 00:12:50.778 CC lib/nvmf/auth.o 00:12:50.778 CC lib/ftl/ftl_nv_cache.o 00:12:50.778 LIB libspdk_scsi.a 00:12:51.035 SO libspdk_scsi.so.9.0 00:12:51.035 CC lib/ftl/ftl_band.o 00:12:51.035 CC 
lib/ftl/ftl_band_ops.o 00:12:51.035 SYMLINK libspdk_scsi.so 00:12:51.035 CC lib/ftl/ftl_writer.o 00:12:51.293 CC lib/ftl/ftl_rq.o 00:12:51.293 CC lib/ftl/ftl_reloc.o 00:12:51.293 CC lib/ftl/ftl_l2p_cache.o 00:12:51.551 CC lib/ftl/ftl_p2l.o 00:12:51.809 CC lib/ftl/mngt/ftl_mngt.o 00:12:51.809 CC lib/iscsi/conn.o 00:12:52.067 CC lib/iscsi/init_grp.o 00:12:52.067 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:12:52.325 CC lib/iscsi/iscsi.o 00:12:52.325 CC lib/iscsi/md5.o 00:12:52.325 CC lib/iscsi/param.o 00:12:52.583 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:12:52.583 CC lib/iscsi/portal_grp.o 00:12:52.583 CC lib/iscsi/tgt_node.o 00:12:52.583 CC lib/iscsi/iscsi_subsystem.o 00:12:52.583 CC lib/iscsi/iscsi_rpc.o 00:12:52.583 CC lib/ftl/mngt/ftl_mngt_startup.o 00:12:52.841 CC lib/ftl/mngt/ftl_mngt_md.o 00:12:53.099 CC lib/iscsi/task.o 00:12:53.099 CC lib/ftl/mngt/ftl_mngt_misc.o 00:12:53.099 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:12:53.357 CC lib/vhost/vhost.o 00:12:53.357 CC lib/ftl/mngt/ftl_mngt_band.o 00:12:53.357 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:12:53.615 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:12:53.615 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:12:53.615 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:12:53.615 CC lib/vhost/vhost_rpc.o 00:12:53.615 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:12:53.873 CC lib/vhost/vhost_scsi.o 00:12:53.873 CC lib/ftl/utils/ftl_conf.o 00:12:53.873 CC lib/ftl/utils/ftl_md.o 00:12:53.873 CC lib/vhost/vhost_blk.o 00:12:54.131 CC lib/ftl/utils/ftl_mempool.o 00:12:54.131 LIB libspdk_nvmf.a 00:12:54.389 CC lib/ftl/utils/ftl_bitmap.o 00:12:54.389 SO libspdk_nvmf.so.19.0 00:12:54.389 CC lib/ftl/utils/ftl_property.o 00:12:54.389 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:12:54.647 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:12:54.647 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:12:54.647 CC lib/vhost/rte_vhost_user.o 00:12:54.905 SYMLINK libspdk_nvmf.so 00:12:54.905 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:12:54.905 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:12:54.905 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:12:54.905 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:12:55.163 CC lib/ftl/upgrade/ftl_sb_v3.o 00:12:55.163 CC lib/ftl/upgrade/ftl_sb_v5.o 00:12:55.163 CC lib/ftl/nvc/ftl_nvc_dev.o 00:12:55.163 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:12:55.163 CC lib/ftl/base/ftl_base_dev.o 00:12:55.163 CC lib/ftl/base/ftl_base_bdev.o 00:12:55.421 CC lib/ftl/ftl_trace.o 00:12:55.421 LIB libspdk_iscsi.a 00:12:55.679 SO libspdk_iscsi.so.8.0 00:12:55.936 LIB libspdk_ftl.a 00:12:55.936 SYMLINK libspdk_iscsi.so 00:12:56.193 SO libspdk_ftl.so.9.0 00:12:56.757 LIB libspdk_vhost.a 00:12:56.757 SYMLINK libspdk_ftl.so 00:12:56.757 SO libspdk_vhost.so.8.0 00:12:56.757 SYMLINK libspdk_vhost.so 00:12:57.323 CC module/env_dpdk/env_dpdk_rpc.o 00:12:57.323 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:12:57.323 CC module/blob/bdev/blob_bdev.o 00:12:57.323 CC module/scheduler/gscheduler/gscheduler.o 00:12:57.323 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:57.323 CC module/keyring/linux/keyring.o 00:12:57.323 CC module/accel/ioat/accel_ioat.o 00:12:57.323 CC module/accel/error/accel_error.o 00:12:57.323 CC module/keyring/file/keyring.o 00:12:57.323 CC module/sock/posix/posix.o 00:12:57.581 LIB libspdk_env_dpdk_rpc.a 00:12:57.581 SO libspdk_env_dpdk_rpc.so.6.0 00:12:57.581 SYMLINK libspdk_env_dpdk_rpc.so 00:12:57.581 CC module/keyring/file/keyring_rpc.o 00:12:57.581 CC module/keyring/linux/keyring_rpc.o 00:12:57.581 LIB libspdk_scheduler_gscheduler.a 00:12:57.581 LIB libspdk_scheduler_dpdk_governor.a 00:12:57.581 CC 
module/accel/error/accel_error_rpc.o 00:12:57.581 SO libspdk_scheduler_gscheduler.so.4.0 00:12:57.581 SO libspdk_scheduler_dpdk_governor.so.4.0 00:12:57.581 LIB libspdk_scheduler_dynamic.a 00:12:57.839 SO libspdk_scheduler_dynamic.so.4.0 00:12:57.839 CC module/accel/ioat/accel_ioat_rpc.o 00:12:57.839 SYMLINK libspdk_scheduler_gscheduler.so 00:12:57.839 SYMLINK libspdk_scheduler_dpdk_governor.so 00:12:57.839 SYMLINK libspdk_scheduler_dynamic.so 00:12:57.839 LIB libspdk_accel_error.a 00:12:57.839 LIB libspdk_keyring_linux.a 00:12:57.839 LIB libspdk_keyring_file.a 00:12:57.839 SO libspdk_accel_error.so.2.0 00:12:57.839 CC module/accel/dsa/accel_dsa.o 00:12:57.839 LIB libspdk_blob_bdev.a 00:12:57.839 CC module/accel/dsa/accel_dsa_rpc.o 00:12:57.839 SO libspdk_keyring_linux.so.1.0 00:12:57.839 SO libspdk_keyring_file.so.1.0 00:12:57.839 SO libspdk_blob_bdev.so.11.0 00:12:58.098 LIB libspdk_accel_ioat.a 00:12:58.098 SYMLINK libspdk_keyring_linux.so 00:12:58.098 SYMLINK libspdk_accel_error.so 00:12:58.098 SYMLINK libspdk_keyring_file.so 00:12:58.098 SO libspdk_accel_ioat.so.6.0 00:12:58.098 SYMLINK libspdk_blob_bdev.so 00:12:58.098 CC module/accel/iaa/accel_iaa.o 00:12:58.098 CC module/accel/iaa/accel_iaa_rpc.o 00:12:58.098 SYMLINK libspdk_accel_ioat.so 00:12:58.357 LIB libspdk_accel_dsa.a 00:12:58.357 SO libspdk_accel_dsa.so.5.0 00:12:58.357 LIB libspdk_accel_iaa.a 00:12:58.357 CC module/bdev/delay/vbdev_delay.o 00:12:58.357 CC module/bdev/malloc/bdev_malloc.o 00:12:58.357 CC module/bdev/gpt/gpt.o 00:12:58.357 CC module/bdev/lvol/vbdev_lvol.o 00:12:58.357 CC module/blobfs/bdev/blobfs_bdev.o 00:12:58.357 CC module/bdev/error/vbdev_error.o 00:12:58.357 SO libspdk_accel_iaa.so.3.0 00:12:58.615 SYMLINK libspdk_accel_dsa.so 00:12:58.615 CC module/bdev/error/vbdev_error_rpc.o 00:12:58.615 CC module/bdev/null/bdev_null.o 00:12:58.615 SYMLINK libspdk_accel_iaa.so 00:12:58.615 CC module/bdev/null/bdev_null_rpc.o 00:12:58.873 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:58.873 CC module/bdev/gpt/vbdev_gpt.o 00:12:58.873 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:58.873 LIB libspdk_bdev_error.a 00:12:58.873 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:58.873 SO libspdk_bdev_error.so.6.0 00:12:58.873 LIB libspdk_sock_posix.a 00:12:58.873 SO libspdk_sock_posix.so.6.0 00:12:59.131 SYMLINK libspdk_bdev_error.so 00:12:59.131 LIB libspdk_blobfs_bdev.a 00:12:59.131 LIB libspdk_bdev_null.a 00:12:59.131 SYMLINK libspdk_sock_posix.so 00:12:59.131 SO libspdk_blobfs_bdev.so.6.0 00:12:59.131 SO libspdk_bdev_null.so.6.0 00:12:59.131 LIB libspdk_bdev_delay.a 00:12:59.131 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:59.131 SO libspdk_bdev_delay.so.6.0 00:12:59.398 LIB libspdk_bdev_gpt.a 00:12:59.398 SYMLINK libspdk_blobfs_bdev.so 00:12:59.398 SYMLINK libspdk_bdev_null.so 00:12:59.398 CC module/bdev/nvme/bdev_nvme.o 00:12:59.398 SO libspdk_bdev_gpt.so.6.0 00:12:59.398 SYMLINK libspdk_bdev_delay.so 00:12:59.398 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:59.398 CC module/bdev/passthru/vbdev_passthru.o 00:12:59.398 CC module/bdev/raid/bdev_raid.o 00:12:59.398 SYMLINK libspdk_bdev_gpt.so 00:12:59.671 LIB libspdk_bdev_malloc.a 00:12:59.671 CC module/bdev/zone_block/vbdev_zone_block.o 00:12:59.671 CC module/bdev/split/vbdev_split.o 00:12:59.671 SO libspdk_bdev_malloc.so.6.0 00:12:59.671 LIB libspdk_bdev_lvol.a 00:12:59.671 CC module/bdev/xnvme/bdev_xnvme.o 00:12:59.671 SO libspdk_bdev_lvol.so.6.0 00:12:59.671 CC module/bdev/aio/bdev_aio.o 00:12:59.671 SYMLINK libspdk_bdev_malloc.so 00:12:59.929 CC 
module/bdev/split/vbdev_split_rpc.o 00:12:59.929 SYMLINK libspdk_bdev_lvol.so 00:12:59.929 CC module/bdev/aio/bdev_aio_rpc.o 00:12:59.929 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:59.929 LIB libspdk_bdev_split.a 00:13:00.187 CC module/bdev/raid/bdev_raid_rpc.o 00:13:00.187 SO libspdk_bdev_split.so.6.0 00:13:00.187 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:00.187 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:13:00.187 LIB libspdk_bdev_passthru.a 00:13:00.187 SYMLINK libspdk_bdev_split.so 00:13:00.187 SO libspdk_bdev_passthru.so.6.0 00:13:00.187 CC module/bdev/nvme/nvme_rpc.o 00:13:00.445 SYMLINK libspdk_bdev_passthru.so 00:13:00.445 LIB libspdk_bdev_xnvme.a 00:13:00.445 CC module/bdev/ftl/bdev_ftl.o 00:13:00.445 LIB libspdk_bdev_zone_block.a 00:13:00.445 LIB libspdk_bdev_aio.a 00:13:00.445 SO libspdk_bdev_xnvme.so.3.0 00:13:00.445 SO libspdk_bdev_zone_block.so.6.0 00:13:00.445 CC module/bdev/iscsi/bdev_iscsi.o 00:13:00.445 CC module/bdev/raid/bdev_raid_sb.o 00:13:00.445 SO libspdk_bdev_aio.so.6.0 00:13:00.703 SYMLINK libspdk_bdev_xnvme.so 00:13:00.703 CC module/bdev/raid/raid0.o 00:13:00.703 SYMLINK libspdk_bdev_zone_block.so 00:13:00.703 CC module/bdev/raid/raid1.o 00:13:00.703 SYMLINK libspdk_bdev_aio.so 00:13:00.703 CC module/bdev/raid/concat.o 00:13:00.703 CC module/bdev/nvme/bdev_mdns_client.o 00:13:00.703 CC module/bdev/nvme/vbdev_opal.o 00:13:00.961 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:00.961 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:00.961 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:01.220 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:01.220 LIB libspdk_bdev_raid.a 00:13:01.220 LIB libspdk_bdev_iscsi.a 00:13:01.479 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:01.479 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:01.479 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:01.479 SO libspdk_bdev_raid.so.6.0 00:13:01.479 SO libspdk_bdev_iscsi.so.6.0 00:13:01.479 LIB libspdk_bdev_ftl.a 00:13:01.479 SYMLINK libspdk_bdev_iscsi.so 00:13:01.479 SYMLINK libspdk_bdev_raid.so 00:13:01.479 SO libspdk_bdev_ftl.so.6.0 00:13:01.479 SYMLINK libspdk_bdev_ftl.so 00:13:02.045 LIB libspdk_bdev_virtio.a 00:13:02.045 SO libspdk_bdev_virtio.so.6.0 00:13:02.304 SYMLINK libspdk_bdev_virtio.so 00:13:03.237 LIB libspdk_bdev_nvme.a 00:13:03.237 SO libspdk_bdev_nvme.so.7.0 00:13:03.496 SYMLINK libspdk_bdev_nvme.so 00:13:04.060 CC module/event/subsystems/iobuf/iobuf.o 00:13:04.060 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:04.060 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:04.060 CC module/event/subsystems/scheduler/scheduler.o 00:13:04.060 CC module/event/subsystems/vmd/vmd.o 00:13:04.060 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:04.060 CC module/event/subsystems/keyring/keyring.o 00:13:04.060 CC module/event/subsystems/sock/sock.o 00:13:04.318 LIB libspdk_event_keyring.a 00:13:04.318 LIB libspdk_event_sock.a 00:13:04.318 LIB libspdk_event_vhost_blk.a 00:13:04.318 SO libspdk_event_keyring.so.1.0 00:13:04.318 SO libspdk_event_vhost_blk.so.3.0 00:13:04.318 SO libspdk_event_sock.so.5.0 00:13:04.318 LIB libspdk_event_scheduler.a 00:13:04.318 SYMLINK libspdk_event_vhost_blk.so 00:13:04.318 LIB libspdk_event_iobuf.a 00:13:04.318 SO libspdk_event_scheduler.so.4.0 00:13:04.318 LIB libspdk_event_vmd.a 00:13:04.318 SYMLINK libspdk_event_keyring.so 00:13:04.318 SYMLINK libspdk_event_sock.so 00:13:04.318 SO libspdk_event_vmd.so.6.0 00:13:04.318 SO libspdk_event_iobuf.so.3.0 00:13:04.318 SYMLINK libspdk_event_scheduler.so 00:13:04.576 SYMLINK libspdk_event_vmd.so 00:13:04.576 SYMLINK 
libspdk_event_iobuf.so 00:13:04.835 CC module/event/subsystems/accel/accel.o 00:13:04.835 LIB libspdk_event_accel.a 00:13:04.835 SO libspdk_event_accel.so.6.0 00:13:05.099 SYMLINK libspdk_event_accel.so 00:13:05.357 CC module/event/subsystems/bdev/bdev.o 00:13:05.615 LIB libspdk_event_bdev.a 00:13:05.615 SO libspdk_event_bdev.so.6.0 00:13:05.615 SYMLINK libspdk_event_bdev.so 00:13:05.874 CC module/event/subsystems/ublk/ublk.o 00:13:05.874 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:05.874 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:05.874 CC module/event/subsystems/nbd/nbd.o 00:13:05.874 CC module/event/subsystems/scsi/scsi.o 00:13:05.874 LIB libspdk_event_nbd.a 00:13:06.132 SO libspdk_event_nbd.so.6.0 00:13:06.132 LIB libspdk_event_ublk.a 00:13:06.132 SO libspdk_event_ublk.so.3.0 00:13:06.132 LIB libspdk_event_scsi.a 00:13:06.132 SYMLINK libspdk_event_nbd.so 00:13:06.132 SO libspdk_event_scsi.so.6.0 00:13:06.132 SYMLINK libspdk_event_ublk.so 00:13:06.132 LIB libspdk_event_nvmf.a 00:13:06.132 SYMLINK libspdk_event_scsi.so 00:13:06.132 SO libspdk_event_nvmf.so.6.0 00:13:06.390 SYMLINK libspdk_event_nvmf.so 00:13:06.390 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:06.390 CC module/event/subsystems/iscsi/iscsi.o 00:13:06.646 LIB libspdk_event_vhost_scsi.a 00:13:06.646 LIB libspdk_event_iscsi.a 00:13:06.646 SO libspdk_event_vhost_scsi.so.3.0 00:13:06.646 SO libspdk_event_iscsi.so.6.0 00:13:06.646 SYMLINK libspdk_event_vhost_scsi.so 00:13:06.646 SYMLINK libspdk_event_iscsi.so 00:13:06.904 SO libspdk.so.6.0 00:13:06.904 SYMLINK libspdk.so 00:13:07.163 CC app/spdk_lspci/spdk_lspci.o 00:13:07.163 CXX app/trace/trace.o 00:13:07.163 CC app/trace_record/trace_record.o 00:13:07.163 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:07.163 CC app/nvmf_tgt/nvmf_main.o 00:13:07.163 CC app/spdk_tgt/spdk_tgt.o 00:13:07.163 CC examples/ioat/perf/perf.o 00:13:07.163 CC app/iscsi_tgt/iscsi_tgt.o 00:13:07.163 CC examples/util/zipf/zipf.o 00:13:07.421 CC test/thread/poller_perf/poller_perf.o 00:13:07.421 LINK spdk_lspci 00:13:07.421 LINK spdk_trace_record 00:13:07.421 LINK nvmf_tgt 00:13:07.421 LINK interrupt_tgt 00:13:07.421 LINK spdk_tgt 00:13:07.421 LINK poller_perf 00:13:07.421 LINK iscsi_tgt 00:13:07.679 LINK ioat_perf 00:13:07.679 LINK zipf 00:13:07.679 LINK spdk_trace 00:13:07.679 CC app/spdk_nvme_perf/perf.o 00:13:07.938 CC examples/ioat/verify/verify.o 00:13:07.938 CC app/spdk_nvme_identify/identify.o 00:13:07.938 CC app/spdk_nvme_discover/discovery_aer.o 00:13:07.938 CC app/spdk_top/spdk_top.o 00:13:07.938 CC app/spdk_dd/spdk_dd.o 00:13:07.938 CC test/dma/test_dma/test_dma.o 00:13:07.938 TEST_HEADER include/spdk/accel.h 00:13:07.938 TEST_HEADER include/spdk/accel_module.h 00:13:07.938 TEST_HEADER include/spdk/assert.h 00:13:07.938 CC test/app/bdev_svc/bdev_svc.o 00:13:07.938 TEST_HEADER include/spdk/barrier.h 00:13:07.938 TEST_HEADER include/spdk/base64.h 00:13:07.938 TEST_HEADER include/spdk/bdev.h 00:13:07.938 TEST_HEADER include/spdk/bdev_module.h 00:13:07.938 TEST_HEADER include/spdk/bdev_zone.h 00:13:07.938 TEST_HEADER include/spdk/bit_array.h 00:13:07.938 TEST_HEADER include/spdk/bit_pool.h 00:13:07.938 TEST_HEADER include/spdk/blob_bdev.h 00:13:07.938 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:07.938 TEST_HEADER include/spdk/blobfs.h 00:13:07.938 TEST_HEADER include/spdk/blob.h 00:13:07.938 TEST_HEADER include/spdk/conf.h 00:13:07.938 TEST_HEADER include/spdk/config.h 00:13:07.938 TEST_HEADER include/spdk/cpuset.h 00:13:07.938 TEST_HEADER include/spdk/crc16.h 00:13:07.938 
TEST_HEADER include/spdk/crc32.h 00:13:07.938 TEST_HEADER include/spdk/crc64.h 00:13:07.938 TEST_HEADER include/spdk/dif.h 00:13:07.938 TEST_HEADER include/spdk/dma.h 00:13:07.938 TEST_HEADER include/spdk/endian.h 00:13:08.197 TEST_HEADER include/spdk/env_dpdk.h 00:13:08.197 TEST_HEADER include/spdk/env.h 00:13:08.197 TEST_HEADER include/spdk/event.h 00:13:08.197 TEST_HEADER include/spdk/fd_group.h 00:13:08.197 TEST_HEADER include/spdk/fd.h 00:13:08.197 TEST_HEADER include/spdk/file.h 00:13:08.197 TEST_HEADER include/spdk/ftl.h 00:13:08.197 TEST_HEADER include/spdk/gpt_spec.h 00:13:08.197 TEST_HEADER include/spdk/hexlify.h 00:13:08.197 TEST_HEADER include/spdk/histogram_data.h 00:13:08.197 CC examples/thread/thread/thread_ex.o 00:13:08.197 TEST_HEADER include/spdk/idxd.h 00:13:08.197 TEST_HEADER include/spdk/idxd_spec.h 00:13:08.197 TEST_HEADER include/spdk/init.h 00:13:08.197 TEST_HEADER include/spdk/ioat.h 00:13:08.197 TEST_HEADER include/spdk/ioat_spec.h 00:13:08.197 TEST_HEADER include/spdk/iscsi_spec.h 00:13:08.197 TEST_HEADER include/spdk/json.h 00:13:08.197 TEST_HEADER include/spdk/jsonrpc.h 00:13:08.197 TEST_HEADER include/spdk/keyring.h 00:13:08.197 TEST_HEADER include/spdk/keyring_module.h 00:13:08.197 TEST_HEADER include/spdk/likely.h 00:13:08.197 TEST_HEADER include/spdk/log.h 00:13:08.197 TEST_HEADER include/spdk/lvol.h 00:13:08.197 TEST_HEADER include/spdk/memory.h 00:13:08.197 TEST_HEADER include/spdk/mmio.h 00:13:08.197 TEST_HEADER include/spdk/nbd.h 00:13:08.197 TEST_HEADER include/spdk/net.h 00:13:08.197 TEST_HEADER include/spdk/notify.h 00:13:08.197 LINK spdk_nvme_discover 00:13:08.197 TEST_HEADER include/spdk/nvme.h 00:13:08.197 TEST_HEADER include/spdk/nvme_intel.h 00:13:08.197 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:08.197 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:08.197 TEST_HEADER include/spdk/nvme_spec.h 00:13:08.197 TEST_HEADER include/spdk/nvme_zns.h 00:13:08.197 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:08.197 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:08.197 TEST_HEADER include/spdk/nvmf.h 00:13:08.197 TEST_HEADER include/spdk/nvmf_spec.h 00:13:08.197 TEST_HEADER include/spdk/nvmf_transport.h 00:13:08.197 LINK verify 00:13:08.197 TEST_HEADER include/spdk/opal.h 00:13:08.197 TEST_HEADER include/spdk/opal_spec.h 00:13:08.197 TEST_HEADER include/spdk/pci_ids.h 00:13:08.197 TEST_HEADER include/spdk/pipe.h 00:13:08.197 TEST_HEADER include/spdk/queue.h 00:13:08.197 TEST_HEADER include/spdk/reduce.h 00:13:08.197 TEST_HEADER include/spdk/rpc.h 00:13:08.197 TEST_HEADER include/spdk/scheduler.h 00:13:08.197 TEST_HEADER include/spdk/scsi.h 00:13:08.197 TEST_HEADER include/spdk/scsi_spec.h 00:13:08.197 TEST_HEADER include/spdk/sock.h 00:13:08.197 TEST_HEADER include/spdk/stdinc.h 00:13:08.197 TEST_HEADER include/spdk/string.h 00:13:08.197 TEST_HEADER include/spdk/thread.h 00:13:08.197 TEST_HEADER include/spdk/trace.h 00:13:08.197 TEST_HEADER include/spdk/trace_parser.h 00:13:08.197 TEST_HEADER include/spdk/tree.h 00:13:08.197 TEST_HEADER include/spdk/ublk.h 00:13:08.197 TEST_HEADER include/spdk/util.h 00:13:08.197 TEST_HEADER include/spdk/uuid.h 00:13:08.197 TEST_HEADER include/spdk/version.h 00:13:08.197 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:08.197 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:08.197 TEST_HEADER include/spdk/vhost.h 00:13:08.197 TEST_HEADER include/spdk/vmd.h 00:13:08.197 LINK bdev_svc 00:13:08.197 TEST_HEADER include/spdk/xor.h 00:13:08.197 TEST_HEADER include/spdk/zipf.h 00:13:08.197 CXX test/cpp_headers/accel.o 00:13:08.455 CXX 
test/cpp_headers/accel_module.o 00:13:08.455 LINK thread 00:13:08.455 LINK spdk_dd 00:13:08.455 CC app/fio/nvme/fio_plugin.o 00:13:08.455 LINK test_dma 00:13:08.713 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:08.713 CXX test/cpp_headers/assert.o 00:13:08.713 CC test/env/mem_callbacks/mem_callbacks.o 00:13:08.970 CXX test/cpp_headers/barrier.o 00:13:08.970 LINK spdk_nvme_perf 00:13:08.970 CC examples/sock/hello_world/hello_sock.o 00:13:08.970 CC examples/vmd/lsvmd/lsvmd.o 00:13:08.970 CC test/event/event_perf/event_perf.o 00:13:09.228 LINK spdk_nvme_identify 00:13:09.228 CXX test/cpp_headers/base64.o 00:13:09.228 LINK lsvmd 00:13:09.228 CC test/app/histogram_perf/histogram_perf.o 00:13:09.228 LINK spdk_top 00:13:09.486 LINK event_perf 00:13:09.486 LINK spdk_nvme 00:13:09.486 LINK hello_sock 00:13:09.486 CXX test/cpp_headers/bdev.o 00:13:09.486 LINK nvme_fuzz 00:13:09.486 LINK mem_callbacks 00:13:09.486 LINK histogram_perf 00:13:09.486 CC app/fio/bdev/fio_plugin.o 00:13:09.745 CC examples/vmd/led/led.o 00:13:09.745 CC test/app/jsoncat/jsoncat.o 00:13:09.745 CXX test/cpp_headers/bdev_module.o 00:13:09.745 CC test/app/stub/stub.o 00:13:09.745 CC test/event/reactor/reactor.o 00:13:09.745 LINK led 00:13:09.745 CC test/env/vtophys/vtophys.o 00:13:09.745 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:10.003 LINK jsoncat 00:13:10.003 CC test/env/memory/memory_ut.o 00:13:10.003 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:10.003 CXX test/cpp_headers/bdev_zone.o 00:13:10.003 LINK vtophys 00:13:10.003 LINK reactor 00:13:10.003 LINK stub 00:13:10.003 LINK env_dpdk_post_init 00:13:10.261 LINK spdk_bdev 00:13:10.261 CXX test/cpp_headers/bit_array.o 00:13:10.261 CXX test/cpp_headers/bit_pool.o 00:13:10.261 CC app/vhost/vhost.o 00:13:10.261 CC examples/idxd/perf/perf.o 00:13:10.261 CC test/env/pci/pci_ut.o 00:13:10.261 CC test/event/reactor_perf/reactor_perf.o 00:13:10.519 CXX test/cpp_headers/blob_bdev.o 00:13:10.519 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:10.519 LINK vhost 00:13:10.519 LINK reactor_perf 00:13:10.519 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:10.519 CC test/nvme/aer/aer.o 00:13:10.519 CC examples/accel/perf/accel_perf.o 00:13:10.776 CXX test/cpp_headers/blobfs_bdev.o 00:13:10.776 LINK idxd_perf 00:13:10.776 CC test/event/app_repeat/app_repeat.o 00:13:10.776 CC test/rpc_client/rpc_client_test.o 00:13:10.776 LINK pci_ut 00:13:10.776 CXX test/cpp_headers/blobfs.o 00:13:11.034 CXX test/cpp_headers/blob.o 00:13:11.034 LINK app_repeat 00:13:11.034 LINK aer 00:13:11.034 LINK rpc_client_test 00:13:11.034 CXX test/cpp_headers/conf.o 00:13:11.293 CXX test/cpp_headers/config.o 00:13:11.293 LINK vhost_fuzz 00:13:11.293 CXX test/cpp_headers/cpuset.o 00:13:11.293 CC test/accel/dif/dif.o 00:13:11.293 LINK accel_perf 00:13:11.293 CC test/event/scheduler/scheduler.o 00:13:11.293 CC test/nvme/reset/reset.o 00:13:11.551 CXX test/cpp_headers/crc16.o 00:13:11.551 LINK memory_ut 00:13:11.551 CC examples/nvme/hello_world/hello_world.o 00:13:11.551 CC examples/blob/hello_world/hello_blob.o 00:13:11.551 CC test/blobfs/mkfs/mkfs.o 00:13:11.551 CXX test/cpp_headers/crc32.o 00:13:11.551 LINK scheduler 00:13:11.808 CC examples/blob/cli/blobcli.o 00:13:11.808 LINK reset 00:13:11.808 LINK mkfs 00:13:11.808 LINK hello_world 00:13:11.808 CXX test/cpp_headers/crc64.o 00:13:11.808 LINK dif 00:13:11.808 LINK hello_blob 00:13:12.066 CXX test/cpp_headers/dif.o 00:13:12.066 CC test/nvme/sgl/sgl.o 00:13:12.066 CC test/nvme/e2edp/nvme_dp.o 00:13:12.066 CC examples/bdev/hello_world/hello_bdev.o 
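The CXX test/cpp_headers/*.o entries above appear to be SPDK's header self-containment check: each public header is compiled as its own C++ translation unit, so a header that quietly depends on something its includer happened to pull in fails the build. A minimal sketch of the same idea, assuming an include/spdk/ tree and g++ on PATH; the file layout and flags here are illustrative, not the project's actual build rules:

# Compile every public header in isolation; -fsyntax-only keeps it cheap.
for hdr in include/spdk/*.h; do
  tu="$(mktemp --suffix=.cpp)"
  # Use an absolute path so the temp TU finds the header regardless of cwd.
  printf '#include "%s"\n' "$PWD/$hdr" > "$tu"
  g++ -I include -fsyntax-only "$tu" || echo "not self-contained: $hdr"
  rm -f "$tu"
done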
00:13:12.066 CC examples/nvme/reconnect/reconnect.o 00:13:12.066 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:12.066 CXX test/cpp_headers/dma.o 00:13:12.066 CC examples/nvme/arbitration/arbitration.o 00:13:12.324 CC examples/nvme/hotplug/hotplug.o 00:13:12.324 LINK iscsi_fuzz 00:13:12.324 LINK hello_bdev 00:13:12.324 LINK blobcli 00:13:12.324 LINK sgl 00:13:12.324 LINK nvme_dp 00:13:12.324 CXX test/cpp_headers/endian.o 00:13:12.582 CXX test/cpp_headers/env_dpdk.o 00:13:12.582 LINK hotplug 00:13:12.582 LINK reconnect 00:13:12.582 LINK arbitration 00:13:12.582 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:12.582 CC examples/bdev/bdevperf/bdevperf.o 00:13:12.582 CC test/nvme/overhead/overhead.o 00:13:12.582 CC examples/nvme/abort/abort.o 00:13:12.582 CXX test/cpp_headers/env.o 00:13:12.841 CXX test/cpp_headers/event.o 00:13:12.841 LINK nvme_manage 00:13:12.841 CXX test/cpp_headers/fd_group.o 00:13:12.841 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:12.841 LINK cmb_copy 00:13:12.841 CC test/lvol/esnap/esnap.o 00:13:12.841 CXX test/cpp_headers/fd.o 00:13:13.098 LINK overhead 00:13:13.098 CXX test/cpp_headers/file.o 00:13:13.098 LINK pmr_persistence 00:13:13.098 CXX test/cpp_headers/ftl.o 00:13:13.098 CC test/nvme/err_injection/err_injection.o 00:13:13.098 CC test/bdev/bdevio/bdevio.o 00:13:13.098 LINK abort 00:13:13.098 CXX test/cpp_headers/gpt_spec.o 00:13:13.356 CXX test/cpp_headers/hexlify.o 00:13:13.356 CC test/nvme/startup/startup.o 00:13:13.356 CXX test/cpp_headers/histogram_data.o 00:13:13.356 CC test/nvme/reserve/reserve.o 00:13:13.356 CXX test/cpp_headers/idxd.o 00:13:13.614 LINK err_injection 00:13:13.614 LINK startup 00:13:13.614 CC test/nvme/simple_copy/simple_copy.o 00:13:13.614 CC test/nvme/connect_stress/connect_stress.o 00:13:13.614 CXX test/cpp_headers/idxd_spec.o 00:13:13.614 CXX test/cpp_headers/init.o 00:13:13.614 LINK reserve 00:13:13.614 LINK bdevio 00:13:13.872 CXX test/cpp_headers/ioat.o 00:13:13.872 CXX test/cpp_headers/ioat_spec.o 00:13:13.872 CC test/nvme/boot_partition/boot_partition.o 00:13:13.872 LINK bdevperf 00:13:13.872 LINK simple_copy 00:13:13.872 LINK connect_stress 00:13:13.872 CXX test/cpp_headers/iscsi_spec.o 00:13:13.872 LINK boot_partition 00:13:14.130 CC test/nvme/compliance/nvme_compliance.o 00:13:14.130 CXX test/cpp_headers/json.o 00:13:14.130 CC test/nvme/fused_ordering/fused_ordering.o 00:13:14.130 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:14.130 CC test/nvme/fdp/fdp.o 00:13:14.130 CC test/nvme/cuse/cuse.o 00:13:14.130 CXX test/cpp_headers/jsonrpc.o 00:13:14.388 CXX test/cpp_headers/keyring.o 00:13:14.388 CC examples/nvmf/nvmf/nvmf.o 00:13:14.388 LINK doorbell_aers 00:13:14.388 LINK fused_ordering 00:13:14.388 CXX test/cpp_headers/keyring_module.o 00:13:14.388 CXX test/cpp_headers/likely.o 00:13:14.388 CXX test/cpp_headers/log.o 00:13:14.646 CXX test/cpp_headers/lvol.o 00:13:14.646 CXX test/cpp_headers/memory.o 00:13:14.646 CXX test/cpp_headers/mmio.o 00:13:14.646 LINK fdp 00:13:14.646 CXX test/cpp_headers/nbd.o 00:13:14.646 CXX test/cpp_headers/net.o 00:13:14.646 CXX test/cpp_headers/notify.o 00:13:14.647 CXX test/cpp_headers/nvme.o 00:13:14.647 LINK nvme_compliance 00:13:14.647 CXX test/cpp_headers/nvme_intel.o 00:13:14.904 CXX test/cpp_headers/nvme_ocssd.o 00:13:14.904 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:14.904 LINK nvmf 00:13:14.904 CXX test/cpp_headers/nvme_spec.o 00:13:14.904 CXX test/cpp_headers/nvme_zns.o 00:13:14.904 CXX test/cpp_headers/nvmf_cmd.o 00:13:14.904 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:14.904 
CXX test/cpp_headers/nvmf.o 00:13:14.904 CXX test/cpp_headers/nvmf_spec.o 00:13:15.162 CXX test/cpp_headers/nvmf_transport.o 00:13:15.162 CXX test/cpp_headers/opal.o 00:13:15.162 CXX test/cpp_headers/opal_spec.o 00:13:15.162 CXX test/cpp_headers/pci_ids.o 00:13:15.162 CXX test/cpp_headers/pipe.o 00:13:15.162 CXX test/cpp_headers/queue.o 00:13:15.420 CXX test/cpp_headers/reduce.o 00:13:15.420 CXX test/cpp_headers/rpc.o 00:13:15.420 CXX test/cpp_headers/scheduler.o 00:13:15.420 CXX test/cpp_headers/scsi.o 00:13:15.420 CXX test/cpp_headers/scsi_spec.o 00:13:15.420 CXX test/cpp_headers/sock.o 00:13:15.420 CXX test/cpp_headers/stdinc.o 00:13:15.420 CXX test/cpp_headers/string.o 00:13:15.420 CXX test/cpp_headers/thread.o 00:13:15.678 CXX test/cpp_headers/trace.o 00:13:15.678 CXX test/cpp_headers/trace_parser.o 00:13:15.678 CXX test/cpp_headers/tree.o 00:13:15.678 CXX test/cpp_headers/ublk.o 00:13:15.678 CXX test/cpp_headers/util.o 00:13:15.678 CXX test/cpp_headers/uuid.o 00:13:15.678 CXX test/cpp_headers/version.o 00:13:15.678 CXX test/cpp_headers/vfio_user_pci.o 00:13:15.678 CXX test/cpp_headers/vfio_user_spec.o 00:13:15.678 CXX test/cpp_headers/vhost.o 00:13:15.678 CXX test/cpp_headers/vmd.o 00:13:15.678 CXX test/cpp_headers/xor.o 00:13:15.937 CXX test/cpp_headers/zipf.o 00:13:16.195 LINK cuse 00:13:20.414 LINK esnap 00:13:20.670 00:13:20.670 real 1m32.374s 00:13:20.670 user 9m25.396s 00:13:20.670 sys 1m51.318s 00:13:20.670 11:42:17 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:13:20.671 11:42:17 make -- common/autotest_common.sh@10 -- $ set +x 00:13:20.671 ************************************ 00:13:20.671 END TEST make 00:13:20.671 ************************************ 00:13:20.671 11:42:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:20.671 11:42:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:13:20.671 11:42:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:13:20.671 11:42:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:20.671 11:42:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:20.671 11:42:17 -- pm/common@44 -- $ pid=5241 00:13:20.671 11:42:17 -- pm/common@50 -- $ kill -TERM 5241 00:13:20.671 11:42:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:20.671 11:42:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:20.671 11:42:17 -- pm/common@44 -- $ pid=5243 00:13:20.671 11:42:17 -- pm/common@50 -- $ kill -TERM 5243 00:13:20.928 11:42:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.928 11:42:17 -- nvmf/common.sh@7 -- # uname -s 00:13:20.928 11:42:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.928 11:42:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.928 11:42:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.928 11:42:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.928 11:42:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.928 11:42:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.928 11:42:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.928 11:42:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.928 11:42:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.928 11:42:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.928 11:42:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49af732a-113d-4feb-846e-4f875fd14a22 00:13:20.928 11:42:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=49af732a-113d-4feb-846e-4f875fd14a22 00:13:20.928 11:42:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.928 11:42:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.928 11:42:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:20.928 11:42:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.928 11:42:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.928 11:42:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.928 11:42:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.928 11:42:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.928 11:42:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.928 11:42:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.928 11:42:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.928 11:42:17 -- paths/export.sh@5 -- # export PATH 00:13:20.928 11:42:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.928 11:42:17 -- nvmf/common.sh@47 -- # : 0 00:13:20.928 11:42:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.928 11:42:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.928 11:42:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.928 11:42:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.928 11:42:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.928 11:42:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.928 11:42:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.928 11:42:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.928 11:42:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:20.928 11:42:17 -- spdk/autotest.sh@32 -- # uname -s 00:13:20.928 11:42:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:20.928 11:42:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:20.928 11:42:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:20.928 11:42:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:20.928 11:42:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:20.928 11:42:17 -- spdk/autotest.sh@44 -- # modprobe nbd 
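The core_pattern handling a few entries back (saving the systemd-coredump handler, then pointing the kernel at scripts/core-collector.sh) is the standard way to capture crash dumps from a test run. A condensed sketch of the idea, run as root; collect.sh is a hypothetical handler, and the trap-based restore is added here only to make the sketch self-contained (the harness does its restore in its own cleanup path):

# Remember the current handler so it can be put back afterwards.
old_core_pattern="$(cat /proc/sys/kernel/core_pattern)"
# A leading '|' tells the kernel to pipe each core dump into the command;
# %P is the dumping PID, %s the signal, %t the dump time (see core(5)).
echo '|/usr/local/bin/collect.sh %P %s %t' > /proc/sys/kernel/core_pattern
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT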
00:13:20.928 11:42:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:20.928 11:42:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:20.928 11:42:17 -- spdk/autotest.sh@48 -- # udevadm_pid=53925 00:13:20.928 11:42:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:20.928 11:42:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:13:20.928 11:42:17 -- pm/common@17 -- # local monitor 00:13:20.928 11:42:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:20.928 11:42:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:20.928 11:42:17 -- pm/common@25 -- # sleep 1 00:13:20.928 11:42:17 -- pm/common@21 -- # date +%s 00:13:20.928 11:42:17 -- pm/common@21 -- # date +%s 00:13:20.928 11:42:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721907737 00:13:20.928 11:42:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721907737 00:13:20.928 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721907737_collect-cpu-load.pm.log 00:13:20.928 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721907737_collect-vmstat.pm.log 00:13:21.861 11:42:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:21.861 11:42:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:21.861 11:42:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.861 11:42:18 -- common/autotest_common.sh@10 -- # set +x 00:13:21.861 11:42:18 -- spdk/autotest.sh@59 -- # create_test_list 00:13:21.861 11:42:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:13:21.861 11:42:18 -- common/autotest_common.sh@10 -- # set +x 00:13:22.119 11:42:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:22.119 11:42:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:22.119 11:42:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:22.119 11:42:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:13:22.119 11:42:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:22.119 11:42:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:22.119 11:42:18 -- common/autotest_common.sh@1455 -- # uname 00:13:22.119 11:42:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:13:22.119 11:42:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:22.119 11:42:18 -- common/autotest_common.sh@1475 -- # uname 00:13:22.119 11:42:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:13:22.119 11:42:18 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:13:22.119 11:42:18 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:13:22.119 11:42:18 -- spdk/autotest.sh@72 -- # hash lcov 00:13:22.119 11:42:18 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:13:22.119 11:42:18 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:13:22.119 --rc lcov_branch_coverage=1 00:13:22.119 --rc lcov_function_coverage=1 00:13:22.119 --rc genhtml_branch_coverage=1 00:13:22.119 --rc genhtml_function_coverage=1 00:13:22.119 --rc genhtml_legend=1 00:13:22.119 --rc geninfo_all_blocks=1 00:13:22.119 ' 00:13:22.119 11:42:18 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:13:22.119 --rc lcov_branch_coverage=1 00:13:22.119 --rc 
lcov_function_coverage=1 00:13:22.119 --rc genhtml_branch_coverage=1 00:13:22.119 --rc genhtml_function_coverage=1 00:13:22.119 --rc genhtml_legend=1 00:13:22.119 --rc geninfo_all_blocks=1 00:13:22.119 ' 00:13:22.119 11:42:18 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:13:22.119 --rc lcov_branch_coverage=1 00:13:22.119 --rc lcov_function_coverage=1 00:13:22.119 --rc genhtml_branch_coverage=1 00:13:22.119 --rc genhtml_function_coverage=1 00:13:22.119 --rc genhtml_legend=1 00:13:22.119 --rc geninfo_all_blocks=1 00:13:22.119 --no-external' 00:13:22.119 11:42:18 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:13:22.119 --rc lcov_branch_coverage=1 00:13:22.119 --rc lcov_function_coverage=1 00:13:22.119 --rc genhtml_branch_coverage=1 00:13:22.119 --rc genhtml_function_coverage=1 00:13:22.119 --rc genhtml_legend=1 00:13:22.119 --rc geninfo_all_blocks=1 00:13:22.119 --no-external' 00:13:22.119 11:42:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:13:22.119 lcov: LCOV version 1.14 00:13:22.119 11:42:19 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:13:40.189 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:13:40.189 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:13:55.070 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:13:55.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:13:55.070 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:13:55.071 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:13:55.071 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:13:55.071 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:13:55.071 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:13:55.072 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:13:55.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:13:56.972 11:42:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:13:56.972 11:42:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.972 11:42:53 -- common/autotest_common.sh@10 -- # set +x 00:13:56.972 11:42:53 -- spdk/autotest.sh@91 -- # rm -f 00:13:56.972 11:42:53 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:57.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:13:58.177 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:13:58.177 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:13:58.177 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:13:58.177 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:13:58.177 11:42:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:13:58.177 11:42:54 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:13:58.177 11:42:54 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:13:58.177 11:42:54 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:13:58.177 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:13:58.177 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:13:58.177 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:13:58.177 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:13:58.177 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.177 11:42:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:58.177 11:42:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:13:58.178 11:42:54 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:13:58.178 11:42:54 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:13:58.178 11:42:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:58.178 11:42:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:13:58.178 11:42:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.178 11:42:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.178 11:42:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:13:58.178 11:42:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:13:58.178 11:42:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:58.178 No valid GPT data, bailing 00:13:58.178 11:42:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:58.178 11:42:55 -- scripts/common.sh@391 -- # pt= 00:13:58.178 11:42:55 -- scripts/common.sh@392 -- # return 1 00:13:58.178 11:42:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:13:58.178 1+0 records in 00:13:58.178 1+0 records out 00:13:58.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123826 s, 84.7 MB/s 00:13:58.178 11:42:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.178 11:42:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.178 11:42:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:13:58.178 11:42:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:13:58.178 11:42:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:13:58.178 No valid GPT data, bailing 00:13:58.178 11:42:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:58.178 11:42:55 -- scripts/common.sh@391 -- # pt= 00:13:58.178 11:42:55 -- scripts/common.sh@392 -- # return 1 00:13:58.178 11:42:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:13:58.178 1+0 records in 00:13:58.178 1+0 records out 00:13:58.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00352605 s, 297 MB/s 00:13:58.178 11:42:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.178 11:42:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.178 11:42:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:13:58.178 11:42:55 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:13:58.178 11:42:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:13:58.436 No valid GPT data, bailing 00:13:58.436 11:42:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:13:58.436 11:42:55 -- scripts/common.sh@391 -- # pt= 00:13:58.436 11:42:55 -- scripts/common.sh@392 -- # return 1 00:13:58.436 11:42:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:13:58.436 1+0 records in 00:13:58.436 1+0 records out 00:13:58.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00359738 s, 291 MB/s 00:13:58.436 11:42:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.436 11:42:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.436 11:42:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:13:58.436 11:42:55 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:13:58.436 11:42:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:13:58.436 No valid GPT data, bailing 00:13:58.436 11:42:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:13:58.436 11:42:55 -- scripts/common.sh@391 -- # pt= 00:13:58.436 11:42:55 -- scripts/common.sh@392 -- # return 1 00:13:58.436 11:42:55 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:13:58.436 1+0 records in 00:13:58.436 1+0 records out 00:13:58.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397598 s, 264 MB/s 00:13:58.436 11:42:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.436 11:42:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.436 11:42:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:13:58.436 11:42:55 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:13:58.436 11:42:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:13:58.436 No valid GPT data, bailing 00:13:58.436 11:42:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:13:58.436 11:42:55 -- scripts/common.sh@391 -- # pt= 00:13:58.436 11:42:55 -- scripts/common.sh@392 -- # return 1 00:13:58.436 11:42:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:13:58.437 1+0 records in 00:13:58.437 1+0 records out 00:13:58.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00294707 s, 356 MB/s 00:13:58.437 11:42:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.437 11:42:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.437 11:42:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:13:58.437 11:42:55 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:13:58.437 11:42:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:13:58.437 No valid GPT data, bailing 00:13:58.437 11:42:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:13:58.437 11:42:55 -- scripts/common.sh@391 -- # pt= 00:13:58.437 11:42:55 -- scripts/common.sh@392 -- # return 1 00:13:58.437 11:42:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:13:58.437 1+0 records in 00:13:58.437 1+0 records out 00:13:58.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00311211 s, 337 MB/s 00:13:58.437 11:42:55 -- spdk/autotest.sh@118 -- # sync 00:13:58.695 11:42:55 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:13:58.695 11:42:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:13:58.695 11:42:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:00.595 11:42:57 -- spdk/autotest.sh@124 -- # uname -s 00:14:00.595 11:42:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:00.595 11:42:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:00.595 11:42:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:00.595 11:42:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.595 11:42:57 -- common/autotest_common.sh@10 -- # set +x 00:14:00.595 ************************************ 00:14:00.595 START TEST setup.sh 00:14:00.595 ************************************ 00:14:00.595 11:42:57 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:00.595 * Looking for test storage... 
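Each namespace above goes through the same guard before being zeroed: the scripts/spdk-gpt.py and blkid -s PTTYPE probes must both come up empty ("No valid GPT data, bailing"), and only then does dd write a megabyte of zeroes to prove the device is free. A condensed sketch of that loop, simplified to the blkid probe alone (the real script also consults spdk-gpt.py and uses an extglob, /dev/nvme*n!(*p*), to skip partitions):

for dev in /dev/nvme*n*; do
  case "$dev" in *p[0-9]*) continue ;; esac   # skip partitions, keep namespaces
  # A non-empty PTTYPE means something owns the device; leave it alone.
  if pt="$(blkid -s PTTYPE -o value "$dev")" && [ -n "$pt" ]; then
    echo "skipping $dev: partition table '$pt' present"
    continue
  fi
  # One zeroed MiB both confirms the device is writable and clears any
  # stale signatures at the start of the disk.
  dd if=/dev/zero of="$dev" bs=1M count=1
done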
00:14:00.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:00.595 11:42:57 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:14:00.595 11:42:57 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:00.595 11:42:57 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:00.595 11:42:57 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:00.595 11:42:57 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.595 11:42:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:00.595 ************************************ 00:14:00.595 START TEST acl 00:14:00.595 ************************************ 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:00.595 * Looking for test storage... 00:14:00.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:00.595 11:42:57 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:00.595 11:42:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:00.596 11:42:57 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:00.596 11:42:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:00.596 11:42:57 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:14:00.596 11:42:57 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:14:00.596 11:42:57 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:14:00.596 11:42:57 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:14:00.596 11:42:57 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:14:00.596 11:42:57 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:00.596 11:42:57 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:01.530 11:42:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:14:01.530 11:42:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:14:01.530 11:42:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:01.530 11:42:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:14:01.530 11:42:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:14:01.530 11:42:58 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:01.787 11:42:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:01.788 11:42:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:01.788 11:42:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.353 Hugepages 00:14:02.353 node hugesize free / total 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.353 00:14:02.353 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:02.353 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:14:02.611 11:42:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:14:02.611 11:42:59 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:02.612 11:42:59 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.612 11:42:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 ************************************ 00:14:02.612 START TEST denied 00:14:02.612 ************************************ 00:14:02.612 11:42:59 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:14:02.612 11:42:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:02.612 11:42:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:02.612 11:42:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:14:02.612 11:42:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:14:02.612 11:42:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:03.985 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:14:03.985 11:43:00 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:03.985 11:43:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:10.546 00:14:10.546 real 0m7.008s 00:14:10.546 user 0m0.743s 00:14:10.546 sys 0m1.282s 00:14:10.546 11:43:06 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.546 ************************************ 00:14:10.546 11:43:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:14:10.546 END TEST denied 00:14:10.547 ************************************ 00:14:10.547 11:43:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:14:10.547 11:43:06 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:10.547 11:43:06 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.547 11:43:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:10.547 ************************************ 00:14:10.547 START TEST allowed 00:14:10.547 ************************************ 00:14:10.547 11:43:06 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:14:10.547 11:43:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:10.547 11:43:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:14:10.547 11:43:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:10.547 11:43:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:14:10.547 11:43:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:10.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:10.805 11:43:07 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:11.738 00:14:11.738 real 0m1.990s 00:14:11.738 user 0m0.946s 00:14:11.738 sys 0m1.036s 00:14:11.738 11:43:08 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.738 ************************************ 00:14:11.738 END TEST allowed 00:14:11.738 ************************************ 00:14:11.738 11:43:08 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 ************************************ 00:14:11.738 END TEST acl 00:14:11.738 ************************************ 00:14:11.738 00:14:11.738 real 0m11.399s 00:14:11.738 user 0m2.867s 00:14:11.738 sys 0m3.564s 00:14:11.738 11:43:08 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.738 11:43:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 11:43:08 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:11.738 11:43:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.738 11:43:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.738 11:43:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 ************************************ 00:14:11.738 START TEST hugepages 00:14:11.738 ************************************ 00:14:11.738 11:43:08 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:11.738 * Looking for test storage... 
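The hugepages suite opens with the same test-storage probe as the others, and from here on the capture is dominated by setup/common.sh's get_meminfo: it snapshots /proc/meminfo (or a node's own meminfo for per-node queries), strips any "Node N" prefix, then walks the fields with IFS=': ' until the requested key matches and echoes its value. The long run of '[[ field == Hugepagesize ]]' / continue pairs below is simply that walk skipping every field ahead of Hugepagesize. A condensed sketch, assuming extglob (which the trace's 'Node +([0-9])' pattern implies) and not claiming to be the verbatim source:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local -a mem
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")             # drop per-node "Node N " prefixes
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo Hugepagesize                         # -> 2048 (kB) on this runner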
00:14:11.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5805056 kB' 'MemAvailable: 7406908 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 444372 kB' 'Inactive: 1474996 kB' 'Active(anon): 112372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1474996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 103532 kB' 'Mapped: 48548 kB' 'Shmem: 10512 kB' 'KReclaimable: 63588 kB' 'Slab: 136592 kB' 'SReclaimable: 63588 kB' 'SUnreclaim: 73004 kB' 'KernelStack: 6524 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.738 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.997 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:11.998 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:14:11.999 11:43:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:11.999 11:43:08 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:11.999 11:43:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.999 11:43:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.999 11:43:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:11.999 ************************************ 00:14:11.999 START TEST default_setup 00:14:11.999 ************************************ 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:14:11.999 11:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:12.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:12.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:12.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:12.823 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.087 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:13.088 
11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934780 kB' 'MemAvailable: 9536364 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 462228 kB' 'Inactive: 1475012 kB' 'Active(anon): 130228 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475012 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 121228 kB' 'Mapped: 48728 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135580 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72556 kB' 'KernelStack: 6496 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:14:13.088 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934780 kB' 'MemAvailable: 9536376 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 462028 kB' 'Inactive: 1475024 kB' 'Active(anon): 130028 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 48552 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135592 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72568 kB' 'KernelStack: 6448 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.089 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.090 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935048 kB' 'MemAvailable: 9536644 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 461932 kB' 'Inactive: 1475024 kB' 'Active(anon): 129932 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121036 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135600 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72576 kB' 'KernelStack: 6480 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.091 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.092 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 
11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:14:13.093 nr_hugepages=1024 00:14:13.093 resv_hugepages=0 00:14:13.093 surplus_hugepages=0 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:13.093 anon_hugepages=0 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.093 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935236 kB' 'MemAvailable: 9536832 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 461620 kB' 'Inactive: 1475024 kB' 'Active(anon): 129620 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120720 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135600 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72576 kB' 'KernelStack: 6464 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:13.094 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 iterations elided: each remaining /proc/meminfo key from Cached through FilePmdMapped is read, fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match, and hits continue ...]
00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
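A note on the escaped patterns that fill these scans: the right-hand side of each [[ ... == ... ]] is a quoted variable holding the key being searched for, and bash's xtrace escapes every character of it, hence \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, to show that the comparison is literal rather than a glob. A minimal reproduction (illustrative snippet, not taken from setup/common.sh):

    set -x
    get=HugePages_Total
    [[ Buffers == "$get" ]]   # traces as: [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]

00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read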
-r var val _ 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
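The helper traced through this whole stretch is setup/common.sh's get_meminfo: it loads a meminfo file into an array and walks it with IFS=': ' read until the requested key matches, echoing the value, which is why the log shows one escaped-pattern test per key and then echo 1024 for HugePages_Total. The call that starts just above repeats the lookup for HugePages_Surp on node 0, with mem_f pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A sketch of the helper as the xtrace implies it; the function signature and loop shape are assumptions, while the individual commands (the mem_f fallback, mapfile, the Node-prefix strip, the match-or-continue chain) appear verbatim in the trace:

    # Sketch of setup/common.sh's get_meminfo, reconstructed from the xtrace above.
    shopt -s extglob                     # the +([0-9]) strip below needs extglob
    get_meminfo() {
        local get=$1                     # key to look up, e.g. HugePages_Total
        local node=$2                    # optional NUMA node, e.g. 0
        local var val
        local mem_f mem
        mem_f=/proc/meminfo              # system-wide counters by default
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # drop the "Node 0 " prefix of sysfs lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long continue chain in the log
            echo "$val"                  # 1024 for HugePages_Total above
            return 0
        done
        return 1
    }

The caller at setup/hugepages.sh@110 then asserts (( 1024 == nr_hugepages + surp + resv )): the kernel's pool must equal what the test configured plus any surplus and reserved pages before the per-node tally is attempted.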
00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.095 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935236 kB' 'MemUsed: 4306740 kB' 'SwapCached: 0 kB' 'Active: 461920 kB' 'Inactive: 1475024 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1817500 kB' 'Mapped: 48612 kB' 'AnonPages: 121032 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63024 kB' 'Slab: 135600 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.096 11:43:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': '
[... identical setup/common.sh@31-32 iterations elided: node0 meminfo keys Active(anon) through ShmemPmdMapped each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hit continue ...]
00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
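Just below, this scan reaches HugePages_Surp, get_meminfo echoes 0, the surplus is folded into nodes_test[0], and the test reports node0=1024 expecting 1024, closing END TEST default_setup after about 1.3 s of wall time. The next test, per_node_1G_alloc, requests 1 GiB of hugepages bound to node 0; the trace records only the derived result (nr_hugepages=512 at setup/hugepages.sh@57), and the arithmetic below reproduces it (a worked restatement using the variable names from the trace, not extra tooling):

    size=1048576                         # get_test_nr_hugepages argument: 1 GiB in kB
    default_hugepages=2048               # Hugepagesize in the meminfo dumps: 2048 kB
    echo $(( size / default_hugepages )) # 512 -> nr_hugepages=512, NRHUGE=512, HUGENODE=0

00:14:13.366 11:43:10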
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:13.366 node0=1024 expecting 1024 00:14:13.366 ************************************ 00:14:13.366 END TEST default_setup 00:14:13.366 ************************************ 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:13.366 00:14:13.366 real 0m1.307s 00:14:13.366 user 0m0.577s 00:14:13.366 sys 0m0.665s 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.366 11:43:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:14:13.366 11:43:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:14:13.366 11:43:10 
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:13.366 11:43:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.366 11:43:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:13.366 ************************************ 00:14:13.366 START TEST per_node_1G_alloc 00:14:13.366 ************************************ 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:14:13.366 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:13.367 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:13.639 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:13.639 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:13.639 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:13.639 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:14:13.639 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8978564 kB' 'MemAvailable: 10580164 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 462116 kB' 'Inactive: 1475028 kB' 'Active(anon): 130116 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121224 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135612 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72588 kB' 'KernelStack: 6480 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.639 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.640 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _
[... identical setup/common.sh@31-32 iterations elided: /proc/meminfo keys Active(anon) through WritebackTmp each fail the \A\n\o\n\H\u\g\e\P\a\g\e\s match and hit continue ...]
00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
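The scan running here is resolving the get_meminfo AnonHugePages call issued at setup/hugepages.sh@97. Before verifying the hugetlb pool, verify_nr_hugepages checks /sys/kernel/mm/transparent_hugepage/enabled, whose content in this run, always [madvise] never, is what the @96 test compares against the pattern *\[\n\e\v\e\r\]*; since THP is not pinned to never, the anonymous-THP counter is queried, and it comes back 0 kB, so anon=0 a few entries below. A sketch of that guard as the trace implies it (the name thp is an assumption; anon, the path, and the pattern are from the trace):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    if [[ $thp != *\[never\]* ]]; then       # THP not fully disabled
        anon=$(get_meminfo AnonHugePages)    # returns 0 (kB) in this run
    fi

00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc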
-- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8978316 kB' 'MemAvailable: 10579916 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 461920 kB' 'Inactive: 1475028 kB' 'Active(anon): 129920 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121052 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135592 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72568 kB' 'KernelStack: 6464 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.641 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.641 
[xtrace condensed: setup/common.sh@31-32 reads every /proc/meminfo key from MemTotal through HugePages_Rsvd and continues past each one, since none of them matches HugePages_Surp]
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
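A side note on the backslash-riddled comparisons such as [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]: that is not what the script source looks like; it is how bash xtrace re-prints a quoted pattern operand. A standalone snippet (hypothetical, not taken from the SPDK scripts) reproduces it:

    set -x
    get=HugePages_Surp
    [[ HugePages_Surp == "$get" ]] && echo matched
    # xtrace prints the quoted right-hand side with every character escaped:
    #   + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    # the escaping signals that the operand was quoted, i.e. compared
    # literally rather than treated as a glob pattern

So each escaped line in the log is simply the literal string comparison from the get_meminfo loop.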
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:13.905 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8978316 kB' 'MemAvailable: 10579916 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 461944 kB' 'Inactive: 1475028 kB' 'Active(anon): 129944 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121052 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135580 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72556 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the same scan repeats, reading every /proc/meminfo key from MemTotal through HugePages_Free and skipping each one that is not HugePages_Rsvd]
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:14:13.907 nr_hugepages=512
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:14:13.907 resv_hugepages=0
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:14:13.907 surplus_hugepages=0
00:14:13.907 anon_hugepages=0
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:13.907 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8978064 kB' 'MemAvailable: 10579664 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 461904 kB' 'Inactive: 1475028 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121040 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135576 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72552 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
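The two (( ... )) tests a few lines up are what this whole sequence builds toward: the per-node 1G allocation counts as correct only if the requested page count is backed by real pages, not inflated by surplus or reserved ones. A condensed sketch, with the variable names taken from the trace (the surrounding logic in setup/hugepages.sh is inferred, not quoted):

    # values fetched via get_meminfo in the trace above
    anon=0            # AnonHugePages
    surp=0            # HugePages_Surp
    resv=0            # HugePages_Rsvd
    nr_hugepages=512  # requested page count

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    (( 512 == nr_hugepages + surp + resv ))  # 512 == 512 + 0 + 0 -> true
    (( 512 == nr_hugepages ))                # true; HugePages_Total is then
                                             # re-read to cross-check the count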
[xtrace condensed: the final scan reads /proc/meminfo keys MemTotal through CommitLimit, skipping each one that is not HugePages_Total]
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:13.909 11:43:10 
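The scan above is the inner loop of the get_meminfo() helper in setup/common.sh. A minimal sketch of it, reconstructed from this xtrace output alone (variable names and line structure as traced; the shipped script may differ in detail):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below
# Sketch of get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo,
# or from the per-node sysfs meminfo when NODE is given.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ mem mem_f line
	mem_f=/proc/meminfo
	# With node unset this probes the nonexistent ".../node/meminfo" path
	# and falls through to /proc/meminfo, exactly as traced below.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node sysfs lines carry a "Node N " prefix; strip it so both
	# sources parse the same way.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		# "$get" is quoted, so it matches literally -- which is why the
		# xtrace prints the pattern backslash-escaped, e.g.
		# \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}
get_meminfo HugePages_Total    # -> 512 on the node traced here
get_meminfo HugePages_Surp 0   # per-node variant, used a few lines below

Every field before the requested one produces one [[ ... ]] / continue pair in the trace, which is where the long runs of skipped fields in this log come from.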
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:13.909 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8979404 kB' 'MemUsed: 3262572 kB' 'SwapCached: 0 kB' 'Active: 461632 kB' 'Inactive: 1475028 kB' 'Active(anon): 129632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1817500 kB' 'Mapped: 48616 kB' 'AnonPages: 120816 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63024 kB' 'Slab: 135576 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:14:13.910 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 [get_meminfo field scan for HugePages_Surp on node0: MemTotal through HugePages_Free, in the order printed above, each skipped via 'continue']
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:13.911 node0=512 expecting 512
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:14:13.911 
00:14:13.911 real	0m0.627s
00:14:13.911 user	0m0.284s
00:14:13.911 sys	0m0.349s
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:13.911 11:43:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:14:13.911 ************************************
00:14:13.911 END TEST per_node_1G_alloc
00:14:13.911 ************************************
00:14:13.911 11:43:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:14:13.911 11:43:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:14:13.911 11:43:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:13.911 11:43:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:14:13.911 ************************************
00:14:13.911 START TEST even_2G_alloc
00:14:13.911 ************************************
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
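With the 2048 kB Hugepagesize visible in the snapshots in this log, the nr_hugepages=1024 above is just the requested size divided by the default hugepage size. A sketch of the arithmetic, assuming get_test_nr_hugepages takes its size argument in kB (2097152 kB = 2 GiB, matching the test name):

# size / default_hugepages: 2097152 kB requested / 2048 kB per page
size=2097152
default_hugepages=2048
echo $(( size / default_hugepages ))   # -> 1024, matching nr_hugepages=1024 above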
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:14:13.911 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:13.912 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:14:13.912 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:14:13.912 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:14:13.912 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:14:13.912 11:43:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:14.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:14.432 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:14.432 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:14.432 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:14.432 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
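At setup/hugepages.sh@153 the test re-invokes the SPDK setup script with the allocation parameters in its environment. A sketch of the equivalent standalone invocation -- path and variable names exactly as traced above; the even-spread meaning of HUGE_EVEN_ALLOC=yes is inferred from the test name and the per-node checks that follow, not from the script source:

# Reserve 1024 hugepages, spread evenly across NUMA nodes (assumed semantics);
# typically needs root.
NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh

The PCI lines above are that script's own output as it skips devices that are mounted or already bound to uio_pci_generic.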
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7930172 kB' 'MemAvailable: 9531772 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 462124 kB' 'Inactive: 1475028 kB' 'Active(anon): 130124 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121228 kB' 'Mapped: 48704 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135616 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72592 kB' 'KernelStack: 6536 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
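The mem=("${mem[@]#Node +([0-9]) }") expansion above is what lets the same parser read both /proc/meminfo and the per-node sysfs files: sysfs lines carry a "Node N " prefix that the extglob pattern strips, while /proc lines pass through untouched. A standalone demo, with sample values taken from the snapshots in this log:

#!/usr/bin/env bash
shopt -s extglob                  # +([0-9]) is an extglob pattern
mem=('Node 0 MemTotal: 12241976 kB' 'MemFree: 7930172 kB')
mem=("${mem[@]#Node +([0-9]) }")  # strips "Node 0 " from the first entry only
printf '%s\n' "${mem[@]}"         # both lines now start with the field name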
00:14:14.432 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 [get_meminfo field scan for AnonHugePages: MemTotal through HardwareCorrupted, in the order printed above, each skipped via 'continue']
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:14.433 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:14.434 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7929920 kB' 'MemAvailable: 9531520 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 461900 kB' 'Inactive: 1475028 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121088 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 63024 kB' 'Slab: 135624 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72600 kB' 'KernelStack: 6480 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
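The verifier appears to reconcile the kernel's hugepage counters with what the test requested: the per_node check at hugepages.sh@110 earlier asserted HugePages_Total == nr_hugepages + surp + resv, and the surp/resv/anon locals gathered here feed the same kind of comparison for the 1024-page pool. A minimal standalone version of that accounting, under the assumption that resv corresponds to the HugePages_Rsvd field in the snapshot above:

#!/usr/bin/env bash
# Cross-check the hugepage pool against the requested size (assumed logic,
# modeled on the (( 512 == nr_hugepages + surp + resv )) check traced earlier).
nr_hugepages=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) && echo OK || echo MISMATCH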
00:14:14.434 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 [get_meminfo field scan for HugePages_Surp: MemTotal through SReclaimable, in the order printed above, each skipped via 'continue']
00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:14.435 
00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:14:14.435 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:14.436 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[xtrace condensed: same local-variable setup and /proc/meminfo capture as above (common.sh@18-31)]
00:14:14.436 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [snapshot identical to the previous one except 'Active: 461696 kB', 'Active(anon): 129696 kB', 'AnonPages: 121044 kB', 'VmallocUsed: 54836 kB']
[xtrace condensed: common.sh@31-32 per-key comparison loop; every field 'continue's until HugePages_Rsvd matches]
[xtrace condensed: common.sh@31-32 skip the remaining fields (CmaTotal through HugePages_Free)]
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
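[editor's sketch] The two arithmetic checks at hugepages.sh@107-109 assert that the hugepage pool is internally consistent: the 1024 pages observed must equal the requested nr_hugepages plus any surplus and reserved pages. A minimal standalone sketch with this run's values (variable names follow the trace; treating the already-expanded literal 1024 as HugePages_Total from the snapshot is an assumption):

#!/usr/bin/env bash
# Values as reported by the meminfo snapshots in this log.
total=1024        # HugePages_Total: current size of the pool (assumed source of the literal 1024)
surp=0            # HugePages_Surp: pages allocated beyond the configured pool
resv=0            # HugePages_Rsvd: pages reserved for mappings but not yet faulted in
nr_hugepages=1024 # requested pool size (vm.nr_hugepages)

# Same identity the trace checks: observed pool == requested + surplus + reserved.
(( total == nr_hugepages + surp + resv )) || echo "unexpected pool size" >&2

# Cross-check the byte-level counter: Hugepagesize * HugePages_Total should
# equal Hugetlb; here 2048 kB * 1024 = 2097152 kB, matching the snapshot.
echo "$((2048 * total)) kB"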
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[xtrace condensed: same local-variable setup and /proc/meminfo capture as above (common.sh@18-31)]
00:14:14.438 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [snapshot identical to the first one except 'Active: 461652 kB', 'Active(anon): 129652 kB', 'AnonPages: 121036 kB', 'VmallocUsed: 54836 kB']
[xtrace condensed: common.sh@31-32 per-key comparison loop matching each field against HugePages_Total]
11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:14:14.439 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7929920 kB' 
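[editor's note: the mapfile / mem=(...) records just above are get_meminfo being pointed at the per-node file /sys/devices/system/node/node0/meminfo. Lines in that file carry a "Node 0 " prefix that /proc/meminfo lines lack, and the extglob expansion strips it from every array element so one key/value parser serves both files. A standalone demo of just that expansion, with sample values taken from this run (illustrative only, not the SPDK helper itself):
    #!/usr/bin/env bash
    shopt -s extglob                          # "+([0-9])" is an extglob pattern
    mem=('Node 0 MemTotal: 12241976 kB' 'Node 0 HugePages_Total: 1024')
    mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node <n> " prefix everywhere
    printf '%s\n' "${mem[@]}"                 # -> MemTotal: 12241976 kB / HugePages_Total: 1024
]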
00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7929920 kB' 'MemUsed: 4312056 kB' 'SwapCached: 0 kB' 'Active: 461956 kB' 'Inactive: 1475028 kB' 'Active(anon): 129956 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 1817500 kB' 'Mapped: 48616 kB' 'AnonPages: 121168 kB' 'Shmem: 10472 kB' 'KernelStack: 6496 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63024 kB' 'Slab: 135624 kB' 'SReclaimable: 63024 kB' 'SUnreclaim: 72600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.440 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[log condensed: identical records for every remaining node0 meminfo key from MemFree through FileHugePages, each compared against HugePages_Surp and skipped with continue, elided]
00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.698 11:43:11
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.698 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:14.699 node0=1024 expecting 1024 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:14.699 00:14:14.699 real 0m0.639s 00:14:14.699 user 0m0.316s 00:14:14.699 sys 0m0.337s 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.699 ************************************ 00:14:14.699 END TEST even_2G_alloc 00:14:14.699 11:43:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:14.699 ************************************ 00:14:14.699 11:43:11 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:14:14.699 11:43:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:14.699 11:43:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.699 11:43:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:14.699 ************************************ 
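[editor's note: everything from the even_2G_alloc banner down to here is driven by one helper plus one identity. get_meminfo scans a meminfo file line by line (IFS=': ' read -r var val _), continue-ing past every key until it reaches the requested one, then echoes the value and returns; verify_nr_hugepages then checks HugePages_Total == nr_hugepages + surp + resv system-wide and per NUMA node, which is the 'node0=1024 expecting 1024' line above. A simplified reconstruction of the helper, assembled from the trace records (illustrative; the real function lives in the SPDK repo's test/setup/common.sh):
    #!/usr/bin/env bash
    shopt -s extglob    # needed by the "Node +([0-9]) " prefix strip below
    # get_meminfo <key> [node] -- print <key>'s value from /proc/meminfo, or from
    # /sys/devices/system/node/node<n>/meminfo when a node number is given.
    get_meminfo() {
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # normalize per-node "Node 0 Key: val" lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # not the requested key -> next line
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total     # -> 1024 in the run above
    get_meminfo HugePages_Surp 0    # -> 0, read from node0's meminfo
The odd_alloc test that starts next exercises the same machinery with HUGEMEM=2049, i.e. a deliberately odd page count (nr_hugepages=1025), to check that an uneven pool still verifies.]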
00:14:14.699 START TEST odd_alloc 00:14:14.699 ************************************ 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:14.699 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:14.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:14.959 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:14.959 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:14.959 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:14.959 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:14.959 11:43:11 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7928412 kB' 'MemAvailable: 9530020 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 462000 kB' 'Inactive: 1475036 kB' 'Active(anon): 130000 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 121060 kB' 'Mapped: 48640 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135636 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72616 kB' 'KernelStack: 6472 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.959 11:43:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.959 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[log condensed: identical records for every remaining /proc/meminfo key from Buffers through VmallocChunk, each compared against AnonHugePages and skipped with continue, elided]
00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
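[editor's note: the scan in flight here is hugepages.sh@97's get_meminfo AnonHugePages call. The gate at hugepages.sh@96 read /sys/kernel/mm/transparent_hugepage/enabled (reported above as "always [madvise] never") and, since THP is not pinned to [never], verify_nr_hugepages folds anonymous-THP usage into its totals; the records right after this note land on AnonHugePages and yield anon=0. A minimal sketch of that gate under stated assumptions -- awk stands in for the get_meminfo helper, and both paths are standard kernel interfaces:
    #!/usr/bin/env bash
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then                   # THP not disabled outright
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"                                    # this run reports anon=0
]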
00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:14.960 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.961 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.961 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:14.961 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:14.961 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7928160 kB' 'MemAvailable: 9529768 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 462024 kB' 'Inactive: 1475036 kB' 'Active(anon): 130024 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121124 kB' 'Mapped: 48556 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135556 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72536 kB' 'KernelStack: 6480 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:14.961 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.961 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[log condensed: identical records for every remaining /proc/meminfo key from MemFree through FileHugePages, each compared against HugePages_Surp and skipped with continue, elided]
00:14:15.223 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.223 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.223
11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.223 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.223 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.223 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.223 11:43:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.223 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7928160 kB' 'MemAvailable: 9529768 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 461712 kB' 'Inactive: 1475036 kB' 'Active(anon): 129712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120808 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135528 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72508 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
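The HugePages_Rsvd pass beginning above shows the whole shape of setup/common.sh's get_meminfo helper: pick a meminfo source (common.sh@22-24), strip any per-node prefix (@29), then scan "Key: value" pairs (@31-32) until the requested key matches and its value is echoed (@33). A minimal sketch of that loop, reconstructed from the trace rather than copied from the script:

#!/usr/bin/env bash
# Sketch (not the verbatim SPDK helper) of the get_meminfo flow traced
# at setup/common.sh@17-33: choose a meminfo file, strip "Node N "
# prefixes, then scan key/value pairs until $get matches.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With node empty the /sys test fails (the "node/node/meminfo" probe
    # above) and /proc/meminfo is kept; with node=0 the per-node file wins.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node 0 " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # the per-key churn in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd # prints 0 on the machine traced here

Each "continue" record in the log is one non-matching key from the printf snapshot above, which is why the scan visits every field down to the HugePages_* entries.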
00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.224 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.225 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:15.226 nr_hugepages=1025 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:14:15.226 resv_hugepages=0 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:15.226 surplus_hugepages=0 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:15.226 anon_hugepages=0 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7928160 kB' 'MemAvailable: 9529768 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 461984 kB' 'Inactive: 1475036 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121088 kB' 'Mapped: 48616 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135528 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72508 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.226 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 
11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
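Before this scan completes (it stops at HugePages_Total and echoes 1025 further down), the caller's bookkeeping at setup/hugepages.sh@99-110 is worth restating: surp and resv both came back 0, and the test asserts that the odd request of 1025 pages is exactly what the kernel granted. A hedged sketch of those guards, reusing the get_meminfo sketch from the earlier note (variable names follow the trace; the surrounding control flow is assumed):

# Sketch of the odd_alloc accounting visible at setup/hugepages.sh@99-110.
odd_alloc_check() {
    local requested=1025 # the odd page count this test allocates
    local surp resv nr_hugepages
    surp=$(get_meminfo HugePages_Surp)          # 0 in the trace (@99)
    resv=$(get_meminfo HugePages_Rsvd)          # 0 in the trace (@100)
    nr_hugepages=$(get_meminfo HugePages_Total) # 1025, echoed below in the log
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # The two arithmetic guards from @107 and @109 in the trace:
    ((requested == nr_hugepages + surp + resv)) || return 1
    ((requested == nr_hugepages)) || return 1
}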
00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.227 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7928160 kB' 'MemUsed: 4313816 kB' 'SwapCached: 0 kB' 'Active: 461672 kB' 'Inactive: 1475036 kB' 'Active(anon): 129672 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1817508 kB' 'Mapped: 48616 kB' 'AnonPages: 121068 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63020 kB' 'Slab: 135524 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
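This pass is the first to take the per-node branch: called as get_meminfo HugePages_Surp 0 (hugepages.sh@117), the helper finds /sys/devices/system/node/node0/meminfo (common.sh@23-24), so the snapshot gains per-node-only fields such as MemUsed and FilePages. The surrounding walk at setup/hugepages.sh@27-33 and @115-117 reduces to roughly the sketch below; the nodes_sys name is taken from the trace, while how the nodes_test array it iterates is seeded is not shown in this excerpt, so the loop here is an assumption:

# Sketch of the per-node check traced at setup/hugepages.sh@27-33 and
# @115-117: enumerate NUMA nodes, record the expected page count, then
# confirm each node reports zero surplus hugepages.
shopt -s extglob

check_nodes() {
    local node
    local -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        [[ -d $node ]] || continue     # skip if the glob matched nothing
        nodes_sys[${node##*node}]=1025 # expected pages on this node
    done
    local no_nodes=${#nodes_sys[@]} # 1 on the single-node VM traced here
    ((no_nodes > 0)) || return 1
    for node in "${!nodes_sys[@]}"; do
        # node=0 here, so get_meminfo reads node0/meminfo, as in the trace
        [[ $(get_meminfo HugePages_Surp "$node") == 0 ]] || return 1
    done
}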
00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.228 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.229 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:15.230 node0=1025 expecting 1025 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:14:15.230 00:14:15.230 real 0m0.607s 00:14:15.230 user 0m0.312s 00:14:15.230 sys 0m0.334s 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.230 11:43:12 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.230 ************************************ 00:14:15.230 END TEST odd_alloc 00:14:15.230 ************************************ 00:14:15.230 11:43:12 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:14:15.230 11:43:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:15.230 11:43:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.230 11:43:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:15.230 ************************************ 00:14:15.230 START TEST custom_alloc 00:14:15.230 ************************************ 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:15.230 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:15.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:15.750 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:15.750 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:15.750 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:15.750 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:15.750 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:15.750 11:43:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972996 kB' 'MemAvailable: 10574604 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 462092 kB' 'Inactive: 1475036 kB' 'Active(anon): 130092 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121156 kB' 'Mapped: 48660 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135468 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72448 kB' 'KernelStack: 6456 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
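Here verify_nr_hugepages starts over for the custom_alloc pool: hugepages.sh@96 checks the transparent-hugepage mode string before @97 counts AnonHugePages. A condensed sketch of that step, assuming the get_meminfo helper sketched above and the standard sysfs location of the THP mode:

    # Count THP-backed anonymous memory only when THP is not fully disabled.
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # in kB; the snapshot above reports 0
    fi

Since the snapshot above shows AnonHugePages: 0 kB, this scan ends with echo 0 and anon=0.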
00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:14:15.751 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972996 kB' 'MemAvailable: 10574604 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 462048 kB' 'Inactive: 1475036 kB' 'Active(anon): 130048 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 121112 kB' 'Mapped: 48660 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135464 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72444 kB' 'KernelStack: 6424 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.752 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.753 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
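This second snapshot scan extracts HugePages_Surp, and a third fetches HugePages_Rsvd; the verifier then needs only three counters. A compact sketch of the accounting it applies, the same shape as the (( 1025 == nr_hugepages + surp + resv )) test at hugepages.sh@110 in the odd_alloc run above, with get_meminfo as sketched earlier:

    nr_hugepages=512                        # requested for custom_alloc (1048576 kB / 2048 kB pages)
    total=$(get_meminfo HugePages_Total)    # 512 in the dump above
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2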
00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.754 11:43:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972996 kB' 'MemAvailable: 10574604 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 461664 kB' 'Inactive: 1475036 kB' 'Active(anon): 129664 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120748 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135488 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72468 kB' 'KernelStack: 6448 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.754 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.755 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:15.756 nr_hugepages=512 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:15.756 resv_hugepages=0 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:15.756 surplus_hugepages=0 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:15.756 anon_hugepages=0 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var 
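[Editor's note for readers following the trace: the condensed common.sh@31-32 loops above are the get_meminfo helper splitting each meminfo line on IFS=': ' and skipping keys until the requested one matches, at which point @33 echoes the value. Below is a minimal standalone sketch of that pattern, assuming bash with extglob; it mirrors what the trace shows, not the verbatim common.sh source.]

#!/usr/bin/env bash
shopt -s extglob   # the "Node +([0-9]) " strip below needs extended globs

# get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from the
# per-node meminfo file when NODE is given (per-node lines carry a "Node N " prefix).
get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local -a mem
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, as in the trace
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"   # "HugePages_Total:  512" -> var/val
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total    # prints 512 on the VM traced above
get_meminfo HugePages_Surp 0   # per-node query, as in hugepages.sh@117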
00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:15.756 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue: get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, IFS=': ', read -r var val _]
00:14:15.757 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972996 kB' 'MemAvailable: 10574604 kB' 'Buffers: 2436 kB' 'Cached: 1815072 kB' 'SwapCached: 0 kB' 'Active: 461612 kB' 'Inactive: 1475036 kB' 'Active(anon): 129612 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120988 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135488 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72468 kB' 'KernelStack: 6432 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
00:14:15.757 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop skips every key, MemTotal through Unaccepted, until the HugePages_Total line matches]
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, strip "Node N " prefixes, IFS=': ', read -r var val _]
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972996 kB' 'MemUsed: 3268980 kB' 'SwapCached: 0 kB' 'Active: 461704 kB' 'Inactive: 1475036 kB' 'Active(anon): 129704 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1817508 kB' 'Mapped: 48648 kB' 'AnonPages: 120824 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63020 kB' 'Slab: 135488 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
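[Editor's note: the get_nodes trace a few lines up shows how hugepages.sh discovers NUMA nodes: an extglob pathname pattern expands to every /sys/devices/system/node/nodeN directory, and the numeric id is extracted with ${node##*node}. Below is a small sketch of that idiom under the same extglob assumption; nodes_sys mirrors the array name in the trace, and the per-node count of 512 is the value from this run, not a fixed constant.]

#!/usr/bin/env bash
shopt -s extglob nullglob   # extglob for node+([0-9]); nullglob so no match expands to nothing

declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512   # key the array by the numeric node id
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "no_nodes=$no_nodes"           # the traced VM reports no_nodes=1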
00:14:15.758 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop skips each node0 meminfo key, MemTotal through FilePmdMapped, while scanning for HugePages_Surp]
00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.760 11:43:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:15.760 node0=512 expecting 512 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:15.760 00:14:15.760 real 0m0.574s 00:14:15.760 user 0m0.299s 00:14:15.760 sys 0m0.312s 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.760 ************************************ 00:14:15.760 END TEST custom_alloc 00:14:15.760 11:43:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:15.760 ************************************ 00:14:15.760 11:43:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:14:15.760 11:43:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:15.760 11:43:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.760 11:43:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:15.760 ************************************ 00:14:15.760 START TEST no_shrink_alloc 00:14:15.760 ************************************ 00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:15.760 11:43:12 
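Before the next stretch of xtrace: get_test_nr_hugepages converts the requested pool size into a hugepage count and records the per-node expectation that verify_nr_hugepages later checks. A minimal sketch, reconstructed from the values logged below rather than copied from setup/hugepages.sh; it treats size as kB and assumes the 2048 kB default page size shown by the 'Hugepagesize: 2048 kB' dumps in this log:

    # sketch: 2097152 kB / 2048 kB per page = 1024 pages, booked against node 0
    get_test_nr_hugepages() {
        local size=$1; shift                  # first arg: pool size in kB
        local node_ids=("$@")                 # remaining args: NUMA nodes ('0' here)
        local default_hugepages=2048          # kB, assumed from the meminfo dumps
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$((size / default_hugepages))
        local -g nodes_test=()
        local _no_nodes
        for _no_nodes in "${node_ids[@]}"; do
            nodes_test[_no_nodes]=$nr_hugepages   # expect the full pool on each listed node
        done
    }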
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:14:15.760 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:14:15.761 11:43:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:16.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:16.331 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:16.331 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:16.331 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:16.331 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
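verify_nr_hugepages, as it appears at hugepages.sh@89-130 throughout this log, gates its anonymous-THP sample on /sys/kernel/mm/transparent_hugepage/enabled not reading '[never]' (that is what the '[[ always [madvise] never != ... ]]' test above evaluates), then pulls each counter through get_meminfo. A rough outline under those assumptions, not the verbatim function:

    verify_nr_hugepages() {
        local node surp resv anon
        local sorted_t=() sorted_s=()
        # sample AnonHugePages only while THP is not fully disabled (assumed source
        # of the 'always [madvise] never' string seen in the trace)
        [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] \
            && anon=$(get_meminfo AnonHugePages)   # 0 in this run
        surp=$(get_meminfo HugePages_Surp)         # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)         # sampled further down
        # per-node totals from nodes_test[] are then folded into sorted_t/sorted_s
        # and reported as 'node<N>=<count> expecting <count>' (compare the
        # 'node0=512 expecting 512' line in the custom_alloc test above)
    }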
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:16.331 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922992 kB' 'MemAvailable: 9524596 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 462280 kB' 'Inactive: 1475032 kB' 'Active(anon): 130280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 121088 kB' 'Mapped: 48808 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135472 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72452 kB' 'KernelStack: 6504 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
00:14:16.331-332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and hits continue, until AnonHugePages matches]
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:16.332 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:16.333 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922992 kB' 'MemAvailable: 9524596 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 462076 kB' 'Inactive: 1475032 kB' 'Active(anon): 130076 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 121176 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135500 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 6448 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
00:14:16.333-334 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd fails [[ $var == HugePages_Surp ]] and hits continue, until HugePages_Surp matches]
00:14:16.334 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:16.334 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:16.334 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
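With surp=0 recorded, the same scan runs once more for HugePages_Rsvd. As a quick cross-check against the dump just printed (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0), a pool with no surplus and no reserved pages should still be fully free; a hypothetical one-liner on top of the get_meminfo sketch above:

    (( $(get_meminfo HugePages_Free) == $(get_meminfo HugePages_Total) - $(get_meminfo HugePages_Surp) )) \
        && echo 'hugepage pool intact'        # 1024 == 1024 - 0 in this run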
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 
11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.335 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:16.336 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:16.336 nr_hugepages=1024 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:16.337 resv_hugepages=0 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:16.337 surplus_hugepages=0 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:16.337 anon_hugepages=0 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922992 kB' 'MemAvailable: 9524596 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 462000 kB' 'Inactive: 1475032 kB' 'Active(anon): 130000 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 121104 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135532 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72512 kB' 'KernelStack: 6480 kB' 'PageTables: 4124 kB' 
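Every get_meminfo call in this test follows the pattern just traced: snapshot the chosen meminfo file into an array, then split each 'Key: value' line on IFS=': ' and print the value of the first key that matches. The backslash-heavy comparisons ([[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]) are only xtrace requoting a quoted, literal match. A minimal standalone sketch of the same pattern, assuming bash with extglob; the helper name meminfo_field is ours, not the real get_meminfo in setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob

    # Illustrative re-creation of the lookup the trace performs. With no
    # node argument the probe of /sys/devices/system/node/node/meminfo
    # fails, so /proc/meminfo is used, exactly as the [[ -e ... ]] record
    # above evaluates.
    meminfo_field() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "MemTotal: 12241976 kB" -> var=MemTotal val=12241976 _=kB
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    meminfo_field HugePages_Total     # prints 1024 on this runner
    meminfo_field HugePages_Surp 0    # per-node variant, prints 0 here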
11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922992 kB' 'MemAvailable: 9524596 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 462000 kB' 'Inactive: 1475032 kB' 'Active(anon): 130000 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 121104 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135532 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72512 kB' 'KernelStack: 6480 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
00:14:16.337 11:43:13 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 scans the snapshot above, continuing on every key until HugePages_Total]
00:14:16.338 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:16.338 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:14:16.338 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
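At this point the test holds surp=0, resv=0 and a measured pool of 1024 pages, and the gate it applies next (the (( 1024 == nr_hugepages + surp + resv )) record below) requires the kernel's HugePages_Total to equal the expected count plus surplus plus reserved pages. A sketch of that check, reusing the meminfo_field stand-in from above; treating 1024 as a configured constant is our assumption:

    expected=1024                             # nr_hugepages the test configured
    surp=$(meminfo_field HugePages_Surp)      # 0 in this run
    resv=$(meminfo_field HugePages_Rsvd)      # 0 in this run
    total=$(meminfo_field HugePages_Total)    # 1024 in this run

    # Mirrors (( 1024 == nr_hugepages + surp + resv )) from the trace.
    if (( total == expected + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "pool mismatch: $total != $expected + $surp + $resv" >&2
        exit 1
    fi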
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:16.596 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922992 kB' 'MemUsed: 4318984 kB' 'SwapCached: 0 kB' 'Active: 461980 kB' 'Inactive: 1475032 kB' 'Active(anon): 129980 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'FilePages: 1817504 kB' 'Mapped: 48648 kB' 'AnonPages: 121108 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63020 kB' 'Slab: 135528 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 scans node0's snapshot above, continuing on every key until HugePages_Surp]
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
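The per-node pass traced above mirrors the global one: get_nodes globs /sys/devices/system/node/node+([0-9]) to seed nodes_sys with the expected 1024 pages per node, and the HugePages_Surp value just read from node0's meminfo (0) is folded into nodes_test. A rough sketch of that bookkeeping; in the real test nodes_test is seeded earlier from per-node counters, so starting it at the expected value here is an assumption:

    shopt -s extglob

    nodes_sys=()
    nodes_test=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024     # expected pages on this node
        nodes_test[${node##*node}]=1024    # assumption: measured base count
    done
    no_nodes=${#nodes_sys[@]}              # 1 on this single-node VM
    (( no_nodes > 0 )) || exit 1

    resv=0                                 # from the HugePages_Rsvd lookup above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(meminfo_field HugePages_Surp "$node")   # reads node0/meminfo
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done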
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:16.597 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:16.598 node0=1024 expecting 1024 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:16.598 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:16.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:16.857 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:16.857 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:16.857 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:16.857 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:16.857 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
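For orientation: the 'always [madvise] never' record just above is hugepages.sh probing the kernel's transparent-hugepage policy string, where the bracketed word is the active mode. A minimal sketch of that gate, reconstructed from the trace (the sysfs path is the standard kernel location; the surrounding code is an approximation, not the verbatim SPDK source):

    # Sketch of the hugepages.sh@96-97 gate above (an approximation
    # reconstructed from the xtrace, not the verbatim SPDK source).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is not globally disabled, so anonymous huge pages may exist;
        # read their total from /proc/meminfo before checking hugetlb counters.
        anon=$(get_meminfo AnonHugePages)
    fi

Because the policy here is "[madvise]" rather than "[never]", the test does not match and the script proceeds to the get_meminfo AnonHugePages call traced next.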
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7918128 kB' 'MemAvailable: 9519728 kB' 'Buffers: 2436 kB' 'Cached: 1815064 kB' 'SwapCached: 0 kB' 'Active: 462764 kB' 'Inactive: 1475028 kB' 'Active(anon): 130764 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 121944 kB' 'Mapped: 48628 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135524 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72504 kB' 'KernelStack: 6600 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
00:14:16.857 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the snapshot above (MemTotal through HardwareCorrupted) against AnonHugePages; every key falls through to 'continue']
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo locals and mapfile setup, as above, with get=HugePages_Surp]
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7918380 kB' 'MemAvailable: 9519984 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 462184 kB' 'Inactive: 1475032 kB' 'Active(anon): 130184 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 121324 kB' 'Mapped: 48832 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135512 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72492 kB' 'KernelStack: 6520 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
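Each printf snapshot like the one just above is the whole of /proc/meminfo after mapfile read it into the 'mem' array; the @31-@33 records that follow are the scan for a single key. A minimal sketch of the routine those records trace, reconstructed from the xtrace (an approximation; SPDK's actual setup/common.sh may differ in detail):

    # Reconstruction of the get_meminfo loop behind the @17-@33 records
    # (a sketch based on the xtrace, not the verbatim SPDK source).
    shopt -s extglob  # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}  # key to look up, optional NUMA node
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's local meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key: non-matching keys show up as 'continue' in the
        # trace, the match as 'echo <value>' followed by 'return 0'.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as 'get_meminfo HugePages_Surp' (or with a node number as a second argument), which is the shape of the @97/@99/@100 invocations in the trace; the linear scan explains why every lookup replays the full list of meminfo keys.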
00:14:16.858 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan of the snapshot above (MemTotal through HugePages_Rsvd) against HugePages_Surp; every key falls through to 'continue']
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo locals and mapfile setup, as above, with get=HugePages_Rsvd]
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7917876 kB' 'MemAvailable: 9519480 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 459408 kB' 'Inactive: 1475032 kB' 'Active(anon): 127408 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 118532 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63020 kB' 'Slab: 135436 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72416 kB' 'KernelStack: 6408 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
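The @99/@100 records above fetch the surplus and reserved hugepage counters, and the @117-@130 records seen earlier ('node0=1024 expecting 1024') fold the per-node counts into two index-sets and compare them. A rough sketch of that verification flow, reconstructed from the trace (array and variable names follow the xtrace; the ordering and details of the real SPDK hugepages.sh may differ):

    # Rough reconstruction of verify_nr_hugepages from the hugepages.sh
    # records (a sketch, not the verbatim SPDK source). nodes_test/nodes_sys
    # are assumed to be populated elsewhere from per-node hugepage counts.
    surp=$(get_meminfo HugePages_Surp)  # @99  -> 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)  # @100 -> 0 in this run

    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do  # @126
        (( nodes_test[node] += surp + resv ))  # @117 shows '+= 0' here
        # Using the observed count as an array index turns sorted_t/sorted_s
        # into sets: uniform counts leave exactly one index populated.
        sorted_t[nodes_test[node]]=1  # @127
        sorted_s[nodes_sys[node]]=1   # @127
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"  # @128
    done
    # One index per set means comparing indices compares the counts, which
    # is what the '[[ 1024 == \1\0\2\4 ]]' record at @130 shows.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]

In this run the node reports 1024 pages and 1024 are expected, so the comparison succeeds even though NRHUGE=512 was requested (the INFO line above notes the larger pre-existing allocation is kept).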
kB' 'KReclaimable: 63020 kB' 'Slab: 135436 kB' 'SReclaimable: 63020 kB' 'SUnreclaim: 72416 kB' 'KernelStack: 6408 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB' 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:17.123 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:17.123 11:43:13 
00:14:17.122 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: IFS=': ' / read -r var val _ / continue, repeated for every key from MemTotal through HugePages_Free while scanning for HugePages_Rsvd]
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:14:17.124 nr_hugepages=1024
00:14:17.124 resv_hugepages=0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:14:17.124 surplus_hugepages=0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:14:17.124 anon_hugepages=0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:17.124 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7917876 kB' 'MemAvailable: 9519476 kB' 'Buffers: 2436 kB' 'Cached: 1815068 kB' 'SwapCached: 0 kB' 'Active: 459296 kB' 'Inactive: 1475032 kB' 'Active(anon): 127296 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 118432 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63016 kB' 'Slab: 135432 kB' 'SReclaimable: 63016 kB' 'SUnreclaim: 72416 kB' 'KernelStack: 6416 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 5087232 kB' 'DirectMap1G: 9437184 kB'
00:14:17.125 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: read/continue over every key from MemTotal through Unaccepted while scanning for HugePages_Total]
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7917624 kB' 'MemUsed: 4324352 kB' 'SwapCached: 0 kB' 'Active: 459268 kB' 'Inactive: 1475032 kB' 'Active(anon): 127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1475032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1817504 kB' 'Mapped: 47880 kB' 'AnonPages: 118436 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63016 kB' 'Slab: 135436 kB' 'SReclaimable: 63016 kB' 'SUnreclaim: 72420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:14:17.126 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
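At this point the same lookup is repeated against node0's own meminfo file. Per-node accounting is trivial in this run (no_nodes=1), but the walk generalizes, since each node directory also exposes per-size hugepage counters directly. A small illustrative loop over those standard sysfs paths, not part of the traced suite:

    # Illustrative only: enumerate each NUMA node's 2 MiB hugepage counters
    # straight from sysfs (standard kernel paths, not SPDK helpers).
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        hp=$node/hugepages/hugepages-2048kB
        printf 'node%s: total=%s free=%s surplus=%s\n' "$n" \
            "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")" \
            "$(<"$hp/surplus_hugepages")"
    done

The clear_hp teardown at the end of TEST hugepages below does the inverse, writing 0 into each node's per-size nr_hugepages files and exporting CLEAR_HUGE=yes.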
00:14:17.127 11:43:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: read/continue over every node0 meminfo key from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:17.128 node0=1024 expecting 1024
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:14:17.128
00:14:17.128 real 0m1.233s
00:14:17.128 user 0m0.609s
00:14:17.128 sys 0m0.698s
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:17.128 11:43:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:14:17.128 ************************************
00:14:17.128 END TEST no_shrink_alloc
00:14:17.128 ************************************
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:14:17.128 11:43:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:14:17.128
00:14:17.128 real 0m5.356s
00:14:17.128 user 0m2.548s
00:14:17.128 sys 0m2.901s
00:14:17.128 11:43:14 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:17.128 ************************************
00:14:17.128 END TEST hugepages
00:14:17.128 11:43:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:14:17.128 11:43:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:14:17.128 11:43:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:17.128 11:43:14 setup.sh -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST driver
************************************
00:14:17.128 11:43:14 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:14:17.387 * Looking for test storage...
00:14:17.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:14:17.387 11:43:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:14:17.387 11:43:14 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:23.963 11:43:19 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
************************************
START TEST guess_driver
************************************
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@46-49 -- # local driver setup_driver marker; fail=0; pick_driver
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@21-22 -- # local iommu_groups unsafe_vfio
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@27-29 -- # iommu_groups=(/sys/kernel/iommu_groups/*); (( 0 > 0 )); [[ '' == Y ]]
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1: no IOMMU groups and unsafe no-IOMMU mode is off, so vfio is rejected
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:14:23.963 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:14:23.964 Looking for driver=uio_pci_generic
00:14:23.964 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@56-57 -- # echo 'Looking for driver=uio_pci_generic'; read -r _ _ _ _ marker setup_driver
00:14:23.964 11:43:19 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:14:23.964 11:43:19 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:14:23.964 11:43:20 setup.sh.driver.guess_driver -- setup/driver.sh@58-61 -- # skip the 'devices:' header, then every '->' marker reports uio_pci_generic (four matches)
00:14:24.223 11:43:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:14:24.223 11:43:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:14:24.223 11:43:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:30.780 real 0m7.021s
00:14:30.780 user 0m0.741s
00:14:30.780 sys 0m1.336s
************************************
END TEST guess_driver
************************************
00:14:30.781 real 0m12.952s
00:14:30.781 user 0m1.033s
00:14:30.781 sys 0m2.079s
************************************
END TEST driver
************************************
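The decision order guess_driver traced above: vfio wins when the host has IOMMU groups or vfio's unsafe no-IOMMU mode is enabled; otherwise the test falls back to uio_pci_generic as long as modprobe can resolve its module chain. A rough reconstruction of that order, not the SPDK script itself:

    #!/usr/bin/env bash
    # Decide which userspace PCI driver this host can use, vfio first.
    pick_driver() {
        local ngroups noiommu=''
        ngroups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ngroups > 0 )) || [[ $noiommu == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            # On this VM: 0 IOMMU groups, noiommu off -> this branch is taken.
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }

    driver=$(pick_driver) && echo "Looking for driver=$driver"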
00:14:30.781 11:43:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:14:30.781 11:43:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:14:30.781 11:43:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:30.781 11:43:27 setup.sh -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST devices
************************************
00:14:30.781 * Looking for test storage...
00:14:30.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:14:30.781 11:43:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:14:30.781 11:43:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:14:30.781 11:43:27 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:31.347 11:43:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:14:31.347 11:43:28 setup.sh.devices -- common/autotest_common.sh@1669-1673 -- # is_block_zoned for nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3c3n1 nvme3n1: queue/zoned reads none for every namespace, so zoned_devs stays empty
00:14:31.348 11:43:28 setup.sh.devices -- setup/devices.sh@196-198 -- # blocks=(); declare -A blocks_to_pci; min_disk_size=3221225472
00:14:31.348 11:43:28 setup.sh.devices -- setup/devices.sh@200-206 -- # for block in "/sys/block/nvme"!(*c*): block_in_use via scripts/spdk-gpt.py, then keep namespaces of at least min_disk_size
00:14:31.348 No valid GPT data, bailing
00:14:31.348 11:43:28 setup.sh.devices -- # nvme0n1 (0000:00:11.0): 5368709120 bytes >= min_disk_size, kept
00:14:31.348 No valid GPT data, bailing
00:14:31.348 11:43:28 setup.sh.devices -- # nvme1n1 (0000:00:10.0): 6343335936 bytes >= min_disk_size, kept
00:14:31.607 No valid GPT data, bailing
00:14:31.607 11:43:28 setup.sh.devices -- # nvme2n1 (0000:00:12.0): 4294967296 bytes >= min_disk_size, kept
00:14:31.607 No valid GPT data, bailing
00:14:31.607 11:43:28 setup.sh.devices -- # nvme2n2 (0000:00:12.0): 4294967296 bytes >= min_disk_size, kept
00:14:31.607 No valid GPT data, bailing
00:14:31.607 11:43:28 setup.sh.devices -- # nvme2n3 (0000:00:12.0): 4294967296 bytes >= min_disk_size, kept
00:14:31.866 No valid GPT data, bailing
00:14:31.866 11:43:28 setup.sh.devices -- # nvme3n1 (0000:00:13.0): 1073741824 bytes < min_disk_size, skipped
00:14:31.866 11:43:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 ))
00:14:31.866 11:43:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
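The discovery pass qualifies each non-controller NVMe namespace the same way: not zoned, no live partition table, and at least 3 GiB of capacity, which is why the 1 GiB nvme3n1 is dropped. A sketch of that filter under the same rules; plain blkid stands in for scripts/spdk-gpt.py, and the PCI address lookup assumes PCIe-attached namespaces:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    declare -a blocks
    declare -A blocks_to_pci
    for block in /sys/block/nvme!(*c*); do
        name=${block##*/}
        # Skip zoned namespaces outright.
        [[ $(cat "$block/queue/zoned" 2>/dev/null) == none ]] || continue
        # A readable partition-table type means the disk is in use
        # (the real test asks scripts/spdk-gpt.py instead).
        blkid -s PTTYPE -o value "/dev/$name" >/dev/null 2>&1 && continue
        # sysfs reports size in 512-byte sectors regardless of LBA format.
        size=$(( $(< "$block/size") * 512 ))
        (( size >= min_disk_size )) || continue
        # namespace -> controller -> PCI function, e.g. 0000:00:11.0
        pci=$(basename "$(readlink -f "$block/device/device")")
        blocks+=("$name")
        blocks_to_pci[$name]=$pci
    done
    echo "kept: ${blocks[*]}"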
00:14:31.866 11:43:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
************************************
START TEST nvme_mount
************************************
00:14:31.866 11:43:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95-98 -- # nvme_disk=nvme0n1; nvme_disk_p=nvme0n1p1; nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount; nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:14:31.866 11:43:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:14:31.866 11:43:28 setup.sh.devices.nvme_mount -- setup/common.sh@39-51 -- # disk=nvme0n1; part_no=1; size=1073741824; parts=(nvme0n1p1); (( size /= 4096 ))
00:14:31.866 11:43:28 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:14:31.866 11:43:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:14:32.802 Creating new GPT entries in memory.
00:14:32.802 GPT data structures destroyed! You may now partition the disk using fdisk or
00:14:32.802 other utilities.
00:14:32.802 11:43:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:14:33.738 Creating new GPT entries in memory.
00:14:33.738 The operation has completed successfully.
00:14:33.738 11:43:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59643
00:14:34.001 11:43:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:14:34.001 11:43:30 setup.sh.devices.nvme_mount -- setup/common.sh@66-72 -- # mkdir -p nvme_mount; mkfs.ext4 -qF /dev/nvme0n1p1; mount /dev/nvme0n1p1 nvme_mount
00:14:34.001 11:43:30 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 nvme_mount nvme_mount/test_nvme
00:14:34.001 11:43:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 setup output config
00:14:34.001 11:43:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62-63 -- # 0000:00:11.0: 'Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev' -> found=1
00:14:34.264 11:43:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60-62 -- # remaining devices 0000:00:10.0, 0000:00:03.0 (twice), 0000:00:12.0 and 0000:00:13.0 do not match, skipped
00:14:34.780 11:43:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66-74 -- # (( found == 1 )); nvme_mount/test_nvme exists; rm nvme_mount/test_nvme
00:14:34.781 11:43:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme: umount nvme_mount; wipefs --all /dev/nvme0n1p1
00:14:34.781 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:14:34.781 11:43:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27-28 -- # wipefs --all /dev/nvme0n1
00:14:35.039 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:14:35.039 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:14:35.039 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:14:35.039 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:14:35.039 11:43:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M
00:14:35.298 11:43:32 setup.sh.devices.nvme_mount -- setup/common.sh@66-72 -- # mkdir -p nvme_mount; mkfs.ext4 -qF /dev/nvme0n1 1024M; mount /dev/nvme0n1 nvme_mount
00:14:35.298 11:43:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 nvme_mount nvme_mount/test_nvme
00:14:35.298 11:43:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62-63 -- # 0000:00:11.0: 'Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev' -> found=1; remaining devices skipped
00:14:36.081 11:43:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66-74 -- # (( found == 1 )); rm nvme_mount/test_nvme
00:14:36.081 11:43:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount nvme_mount
00:14:36.081 11:43:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' ''
00:14:36.339 11:43:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62-63 -- # 0000:00:11.0: 'Active devices: data@nvme0n1, so not binding PCI dev' -> found=1; remaining devices skipped
00:14:37.114 11:43:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66-68 -- # (( found == 1 )); no mount point or test file to check; return 0
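Each verify pass follows the same arc: carve the disk, format, mount, drop a marker file, then confirm setup.sh config reports the device as active instead of binding it. A condensed stand-alone version of one such cycle, with shortened paths and udevadm settle in place of sync_dev_uevents.sh:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=$(mktemp -d)                      # stands in for test/setup/nvme_mount

    sgdisk "$disk" --zap-all
    sgdisk "$disk" --new=1:2048:264191    # 262144 sectors = 128 MiB
    udevadm settle                        # plain wait where the test uses sync_dev_uevents.sh

    mkfs.ext4 -qF "$part"
    mount "$part" "$mnt"
    : > "$mnt/test_nvme"                  # the dummy file the verify step looks for
    mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo 'verify: found=1'

    rm "$mnt/test_nvme"                   # teardown mirrors cleanup_nvme
    umount "$mnt"
    wipefs --all "$part"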
00:14:37.114 11:43:33 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme: nothing mounted; wipefs --all /dev/nvme0n1
00:14:37.114 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:14:37.114 real 0m5.233s
00:14:37.114 user 0m1.482s
00:14:37.114 sys 0m1.472s
************************************
END TEST nvme_mount
************************************
00:14:37.114 11:43:33 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
************************************
START TEST dm_mount
************************************
00:14:37.114 11:43:33 setup.sh.devices.dm_mount -- setup/devices.sh@144-146 -- # pv=nvme0n1; pv0=nvme0n1p1; pv1=nvme0n1p2
00:14:37.114 11:43:33 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:14:37.114 11:43:33 setup.sh.devices.dm_mount -- setup/common.sh@39-51 -- # disk=nvme0n1; part_no=2; size=1073741824; parts=(nvme0n1p1 nvme0n1p2); (( size /= 4096 ))
00:14:37.114 11:43:33 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:14:37.114 11:43:33 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:14:38.048 Creating new GPT entries in memory.
00:14:38.048 GPT data structures destroyed! You may now partition the disk using fdisk or
00:14:38.048 other utilities.
00:14:38.049 11:43:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:14:39.428 Creating new GPT entries in memory.
00:14:39.428 The operation has completed successfully.
00:14:39.428 11:43:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
00:14:40.364 The operation has completed successfully.
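partition_drive cut two adjacent 128 MiB partitions (262144 sectors each) while sync_dev_uevents.sh waited for the kernel's partition uevents. A minimal equivalent with a plain poll instead of the uevent listener:

    #!/usr/bin/env bash
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191     # p1: 128 MiB
    flock "$disk" sgdisk "$disk" --new=2:264192:526335   # p2: 128 MiB, adjacent
    for part in "${disk}p1" "${disk}p2"; do
        until [[ -b $part ]]; do sleep 0.1; done         # crude stand-in for the uevent wait
    done
    echo "partitions ready: ${disk}p1 ${disk}p2"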
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60271
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@150-152 -- # dm_name=nvme_dm_test; dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount; dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@160-166 -- # /dev/mapper/nvme_dm_test appeared on the first of five polls; readlink -f resolves it to /dev/dm-0
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@168-169 -- # dm-0 is listed in the holders of both nvme0n1p1 and nvme0n1p2
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/common.sh@66-72 -- # mkdir -p dm_mount; mkfs.ext4 -qF /dev/mapper/nvme_dm_test; mount /dev/mapper/nvme_dm_test dm_mount
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test dm_mount dm_mount/test_dm
00:14:40.364 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@62-63 -- # 0000:00:11.0: 'Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev' -> found=1; remaining devices skipped
00:14:41.178 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@66-74 -- # (( found == 1 )); dm_mount/test_dm exists; rm dm_mount/test_dm
00:14:41.178 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount dm_mount
00:14:41.178 11:43:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:14:41.178 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@62-63 -- # 0000:00:11.0: 'Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev' -> found=1; remaining devices skipped
00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@66-68 -- # (( found == 1 )); no mount point or test file to check; return 0
00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm: dmsetup remove --force nvme_dm_test
00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@39-40 -- # wipefs --all /dev/nvme0n1p1
00:14:41.954 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@42-43 -- # wipefs --all /dev/nvme0n1p2
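The table behind dmsetup create nvme_dm_test goes in on stdin and is not visible in this log; the simplest layout consistent with both partitions becoming holders of dm-0 is a linear concatenation, sketched here as an assumption rather than the test's actual table:

    #!/usr/bin/env bash
    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")   # segment lengths in 512-byte sectors
    s2=$(blockdev --getsz "$p2")
    # Each table line is "start length linear device offset".
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
        "$s1" "$p1" "$s1" "$s2" "$p2" | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test    # -> /dev/dm-0, as in the trace
    ls /sys/class/block/nvme0n1p1/holders   # both partitions now list dm-0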
00:14:41.954 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:14:41.954 00:14:41.954 real 0m4.930s 00:14:41.954 user 0m0.908s 00:14:41.954 sys 0m0.967s 00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.954 11:43:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:14:41.954 ************************************ 00:14:41.954 END TEST dm_mount 00:14:41.954 ************************************ 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:41.954 11:43:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:42.212 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:42.212 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:42.212 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:42.212 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:42.212 11:43:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:14:42.212 11:43:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:42.471 11:43:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:42.471 11:43:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:42.471 11:43:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:42.471 11:43:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:14:42.471 11:43:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:14:42.471 00:14:42.471 real 0m12.169s 00:14:42.471 user 0m3.329s 00:14:42.471 sys 0m3.220s 00:14:42.471 11:43:39 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.471 ************************************ 00:14:42.471 END TEST devices 00:14:42.471 ************************************ 00:14:42.471 11:43:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:42.471 00:14:42.471 real 0m42.130s 00:14:42.471 user 0m9.873s 00:14:42.471 sys 0m11.914s 00:14:42.471 11:43:39 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.471 11:43:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:42.471 ************************************ 00:14:42.471 END TEST setup.sh 00:14:42.471 ************************************ 00:14:42.471 11:43:39 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:43.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:43.294 Hugepages 00:14:43.294 node hugesize free / total 00:14:43.294 node0 1048576kB 0 / 0 00:14:43.294 node0 2048kB 2048 / 2048 00:14:43.294 
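The dm_mount teardown above follows a fixed pattern: remove the device-mapper target first, then erase any leftover filesystem and partition-table signatures so the next test sees clean media. A minimal standalone sketch of that sequence, reusing the device names from this run (nvme_dm_test, /dev/nvme0n1p1, /dev/nvme0n1p2):

    # Drop the dm target before touching the underlying partitions.
    if [[ -L /dev/mapper/nvme_dm_test ]]; then
        dmsetup remove --force nvme_dm_test
    fi
    # wipefs erases the magic bytes (ext4 superblock, GPT headers, PMBR)
    # reported in the log above, e.g. "2 bytes were erased ... 53 ef".
    for dev in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1; do
        [[ -b $dev ]] && wipefs --all "$dev"
    done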
00:14:43.294 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:43.294 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:43.552 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:14:43.552 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:14:43.552 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:14:43.552 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:14:43.553 11:43:40 -- spdk/autotest.sh@130 -- # uname -s 00:14:43.553 11:43:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:14:43.553 11:43:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:14:43.553 11:43:40 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:44.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:44.684 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.684 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.684 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.684 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:44.684 11:43:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:14:46.080 11:43:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:14:46.080 11:43:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:14:46.080 11:43:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:14:46.080 11:43:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:14:46.080 11:43:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:46.080 11:43:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:46.080 11:43:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:46.080 11:43:42 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:46.080 11:43:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:46.080 11:43:42 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:46.080 11:43:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:46.080 11:43:42 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:46.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:46.337 Waiting for block devices as requested 00:14:46.337 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:46.337 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:46.594 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:46.594 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:51.860 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:51.860 11:43:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:51.860 11:43:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:51.860 11:43:48 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:14:51.860 11:43:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:51.860 11:43:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:51.860 11:43:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:14:51.860 11:43:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:14:51.860 11:43:48 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:51.860 11:43:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:51.860 11:43:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:51.860 11:43:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1557 -- # continue 00:14:51.860 11:43:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:51.860 11:43:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:51.860 11:43:48 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:14:51.860 11:43:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:51.860 11:43:48 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:51.860 11:43:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:51.860 11:43:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:51.860 11:43:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:51.860 11:43:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:51.860 11:43:48 -- common/autotest_common.sh@1557 -- # continue 00:14:51.861 11:43:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:51.861 11:43:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:14:51.861 11:43:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:14:51.861 11:43:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:51.861 11:43:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:51.861 11:43:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:51.861 11:43:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1557 -- # continue 00:14:51.861 11:43:48 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:51.861 11:43:48 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:14:51.861 11:43:48 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:14:51.861 11:43:48 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:51.861 11:43:48 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:51.861 11:43:48 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:51.861 11:43:48 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:51.861 11:43:48 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:51.861 11:43:48 -- common/autotest_common.sh@1557 -- # continue 00:14:51.861 11:43:48 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:14:51.861 11:43:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.861 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:14:51.861 11:43:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:14:51.861 11:43:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.861 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:14:51.861 11:43:48 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:52.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:52.988 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.988 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.988 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.988 11:43:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:52.988 11:43:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.988 11:43:49 -- common/autotest_common.sh@10 -- # set +x 00:14:52.988 11:43:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:52.988 11:43:49 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:14:52.988 11:43:49 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:14:52.988 11:43:49 -- common/autotest_common.sh@1577 -- # bdfs=() 00:14:52.988 11:43:49 -- common/autotest_common.sh@1577 -- # local bdfs 00:14:52.988 11:43:49 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:14:52.988 11:43:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:52.988 11:43:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:52.988 11:43:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:52.988 11:43:49 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:52.988 11:43:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:53.246 11:43:50 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:53.246 11:43:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:53.246 11:43:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:53.246 11:43:50 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:53.246 11:43:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:53.246 11:43:50 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:53.246 11:43:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:53.246 11:43:50 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:53.246 11:43:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:14:53.246 11:43:50 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:53.246 
11:43:50 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:53.246 11:43:50 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:14:53.246 11:43:50 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:14:53.246 11:43:50 -- common/autotest_common.sh@1593 -- # return 0 00:14:53.246 11:43:50 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:14:53.246 11:43:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:14:53.246 11:43:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:53.246 11:43:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:53.246 11:43:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:14:53.246 11:43:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.246 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:14:53.246 11:43:50 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:14:53.246 11:43:50 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:53.246 11:43:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:53.246 11:43:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.246 11:43:50 -- common/autotest_common.sh@10 -- # set +x 00:14:53.246 ************************************ 00:14:53.246 START TEST env 00:14:53.246 ************************************ 00:14:53.246 11:43:50 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:53.246 * Looking for test storage... 00:14:53.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:53.246 11:43:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:53.246 11:43:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:53.246 11:43:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.246 11:43:50 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.246 ************************************ 00:14:53.246 START TEST env_memory 00:14:53.246 ************************************ 00:14:53.246 11:43:50 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:53.246 00:14:53.246 00:14:53.246 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.246 http://cunit.sourceforge.net/ 00:14:53.246 00:14:53.246 00:14:53.246 Suite: memory 00:14:53.246 Test: alloc and free memory map ...[2024-07-25 11:43:50.275982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:53.504 passed 00:14:53.504 Test: mem map translation ...[2024-07-25 11:43:50.325419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:53.504 [2024-07-25 11:43:50.325501] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:53.504 [2024-07-25 11:43:50.325580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:53.504 [2024-07-25 11:43:50.325608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:53.504 passed 00:14:53.504 Test: mem map registration ...[2024-07-25 11:43:50.405828] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:14:53.504 [2024-07-25 11:43:50.405910] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:14:53.504 passed 00:14:53.504 Test: mem map adjacent registrations ...passed 00:14:53.504 00:14:53.504 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.504 suites 1 1 n/a 0 0 00:14:53.504 tests 4 4 4 0 0 00:14:53.504 asserts 152 152 152 0 n/a 00:14:53.504 00:14:53.504 Elapsed time = 0.321 seconds 00:14:53.504 00:14:53.504 real 0m0.356s 00:14:53.504 user 0m0.331s 00:14:53.504 sys 0m0.021s 00:14:53.504 11:43:50 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.504 11:43:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:14:53.504 ************************************ 00:14:53.504 END TEST env_memory 00:14:53.504 ************************************ 00:14:53.762 11:43:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:53.762 11:43:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:53.762 11:43:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.762 11:43:50 env -- common/autotest_common.sh@10 -- # set +x 00:14:53.762 ************************************ 00:14:53.762 START TEST env_vtophys 00:14:53.762 ************************************ 00:14:53.762 11:43:50 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:53.762 EAL: lib.eal log level changed from notice to debug 00:14:53.762 EAL: Detected lcore 0 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 1 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 2 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 3 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 4 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 5 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 6 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 7 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 8 as core 0 on socket 0 00:14:53.762 EAL: Detected lcore 9 as core 0 on socket 0 00:14:53.762 EAL: Maximum logical cores by configuration: 128 00:14:53.762 EAL: Detected CPU lcores: 10 00:14:53.762 EAL: Detected NUMA nodes: 1 00:14:53.762 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:14:53.762 EAL: Detected shared linkage of DPDK 00:14:53.762 EAL: No shared files mode enabled, IPC will be disabled 00:14:53.762 EAL: Selected IOVA mode 'PA' 00:14:53.762 EAL: Probing VFIO support... 00:14:53.762 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:53.762 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:53.762 EAL: Ask a virtual area of 0x2e000 bytes 00:14:53.762 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:53.762 EAL: Setting up physically contiguous memory... 
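Earlier, the pre_cleanup pass resolved each PCI BDF to its /dev/nvmeX controller node through sysfs and read the OACS (Optional Admin Command Support) field before deciding whether a namespace revert was possible; bit 3 (0x8) advertises namespace management, which is why the log derives oacs_ns_manage=8 from oacs=0x12a. A minimal sketch of that lookup for one controller, assuming nvme-cli is installed and using 0000:00:10.0 from this run:

    bdf=0000:00:10.0
    # Resolve the /sys/class/nvme symlinks and keep the one under this BDF.
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=$(basename "$path")        # e.g. nvme1 for 0000:00:10.0 here
    # OACS is a bitmask; bit 3 (0x8) means Namespace Management is supported.
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) && echo "/dev/$ctrlr supports namespace management"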
00:14:53.762 EAL: Setting maximum number of open files to 524288 00:14:53.762 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:53.762 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:53.762 EAL: Ask a virtual area of 0x61000 bytes 00:14:53.762 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:53.762 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:53.762 EAL: Ask a virtual area of 0x400000000 bytes 00:14:53.762 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:53.762 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:53.762 EAL: Ask a virtual area of 0x61000 bytes 00:14:53.763 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:53.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:53.763 EAL: Ask a virtual area of 0x400000000 bytes 00:14:53.763 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:53.763 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:53.763 EAL: Ask a virtual area of 0x61000 bytes 00:14:53.763 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:53.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:53.763 EAL: Ask a virtual area of 0x400000000 bytes 00:14:53.763 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:53.763 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:53.763 EAL: Ask a virtual area of 0x61000 bytes 00:14:53.763 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:53.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:53.763 EAL: Ask a virtual area of 0x400000000 bytes 00:14:53.763 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:53.763 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:53.763 EAL: Hugepages will be freed exactly as allocated. 00:14:53.763 EAL: No shared files mode enabled, IPC is disabled 00:14:53.763 EAL: No shared files mode enabled, IPC is disabled 00:14:53.763 EAL: TSC frequency is ~2200000 KHz 00:14:53.763 EAL: Main lcore 0 is ready (tid=7fb03290ea40;cpuset=[0]) 00:14:53.763 EAL: Trying to obtain current memory policy. 00:14:53.763 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:53.763 EAL: Restoring previous memory policy: 0 00:14:53.763 EAL: request: mp_malloc_sync 00:14:53.763 EAL: No shared files mode enabled, IPC is disabled 00:14:53.763 EAL: Heap on socket 0 was expanded by 2MB 00:14:53.763 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:53.763 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:53.763 EAL: Mem event callback 'spdk:(nil)' registered 00:14:53.763 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:14:53.763 00:14:53.763 00:14:53.763 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.763 http://cunit.sourceforge.net/ 00:14:53.763 00:14:53.763 00:14:53.763 Suite: components_suite 00:14:54.329 Test: vtophys_malloc_test ...passed 00:14:54.329 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:14:54.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.329 EAL: Restoring previous memory policy: 4 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was expanded by 4MB 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was shrunk by 4MB 00:14:54.329 EAL: Trying to obtain current memory policy. 00:14:54.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.329 EAL: Restoring previous memory policy: 4 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was expanded by 6MB 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was shrunk by 6MB 00:14:54.329 EAL: Trying to obtain current memory policy. 00:14:54.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.329 EAL: Restoring previous memory policy: 4 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was expanded by 10MB 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was shrunk by 10MB 00:14:54.329 EAL: Trying to obtain current memory policy. 00:14:54.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.329 EAL: Restoring previous memory policy: 4 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was expanded by 18MB 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was shrunk by 18MB 00:14:54.329 EAL: Trying to obtain current memory policy. 00:14:54.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.329 EAL: Restoring previous memory policy: 4 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was expanded by 34MB 00:14:54.329 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.329 EAL: request: mp_malloc_sync 00:14:54.329 EAL: No shared files mode enabled, IPC is disabled 00:14:54.329 EAL: Heap on socket 0 was shrunk by 34MB 00:14:54.587 EAL: Trying to obtain current memory policy. 
00:14:54.587 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.587 EAL: Restoring previous memory policy: 4 00:14:54.587 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.587 EAL: request: mp_malloc_sync 00:14:54.587 EAL: No shared files mode enabled, IPC is disabled 00:14:54.587 EAL: Heap on socket 0 was expanded by 66MB 00:14:54.587 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.587 EAL: request: mp_malloc_sync 00:14:54.587 EAL: No shared files mode enabled, IPC is disabled 00:14:54.587 EAL: Heap on socket 0 was shrunk by 66MB 00:14:54.587 EAL: Trying to obtain current memory policy. 00:14:54.587 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:54.587 EAL: Restoring previous memory policy: 4 00:14:54.587 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.587 EAL: request: mp_malloc_sync 00:14:54.587 EAL: No shared files mode enabled, IPC is disabled 00:14:54.587 EAL: Heap on socket 0 was expanded by 130MB 00:14:54.845 EAL: Calling mem event callback 'spdk:(nil)' 00:14:54.845 EAL: request: mp_malloc_sync 00:14:54.845 EAL: No shared files mode enabled, IPC is disabled 00:14:54.845 EAL: Heap on socket 0 was shrunk by 130MB 00:14:55.102 EAL: Trying to obtain current memory policy. 00:14:55.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:55.102 EAL: Restoring previous memory policy: 4 00:14:55.102 EAL: Calling mem event callback 'spdk:(nil)' 00:14:55.102 EAL: request: mp_malloc_sync 00:14:55.102 EAL: No shared files mode enabled, IPC is disabled 00:14:55.102 EAL: Heap on socket 0 was expanded by 258MB 00:14:55.667 EAL: Calling mem event callback 'spdk:(nil)' 00:14:55.667 EAL: request: mp_malloc_sync 00:14:55.667 EAL: No shared files mode enabled, IPC is disabled 00:14:55.667 EAL: Heap on socket 0 was shrunk by 258MB 00:14:55.926 EAL: Trying to obtain current memory policy. 00:14:55.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:56.184 EAL: Restoring previous memory policy: 4 00:14:56.184 EAL: Calling mem event callback 'spdk:(nil)' 00:14:56.184 EAL: request: mp_malloc_sync 00:14:56.184 EAL: No shared files mode enabled, IPC is disabled 00:14:56.184 EAL: Heap on socket 0 was expanded by 514MB 00:14:57.121 EAL: Calling mem event callback 'spdk:(nil)' 00:14:57.121 EAL: request: mp_malloc_sync 00:14:57.121 EAL: No shared files mode enabled, IPC is disabled 00:14:57.121 EAL: Heap on socket 0 was shrunk by 514MB 00:14:58.053 EAL: Trying to obtain current memory policy. 
00:14:58.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:58.053 EAL: Restoring previous memory policy: 4 00:14:58.053 EAL: Calling mem event callback 'spdk:(nil)' 00:14:58.053 EAL: request: mp_malloc_sync 00:14:58.053 EAL: No shared files mode enabled, IPC is disabled 00:14:58.053 EAL: Heap on socket 0 was expanded by 1026MB 00:14:59.947 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.947 EAL: request: mp_malloc_sync 00:14:59.947 EAL: No shared files mode enabled, IPC is disabled 00:14:59.947 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:01.845 passed 00:15:01.845 00:15:01.845 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.845 suites 1 1 n/a 0 0 00:15:01.845 tests 2 2 2 0 0 00:15:01.845 asserts 5411 5411 5411 0 n/a 00:15:01.845 00:15:01.845 Elapsed time = 7.508 seconds 00:15:01.845 EAL: Calling mem event callback 'spdk:(nil)' 00:15:01.845 EAL: request: mp_malloc_sync 00:15:01.845 EAL: No shared files mode enabled, IPC is disabled 00:15:01.845 EAL: Heap on socket 0 was shrunk by 2MB 00:15:01.845 EAL: No shared files mode enabled, IPC is disabled 00:15:01.845 EAL: No shared files mode enabled, IPC is disabled 00:15:01.845 EAL: No shared files mode enabled, IPC is disabled 00:15:01.845 00:15:01.845 real 0m7.844s 00:15:01.845 user 0m6.950s 00:15:01.845 sys 0m0.697s 00:15:01.845 11:43:58 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.845 11:43:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:01.845 ************************************ 00:15:01.845 END TEST env_vtophys 00:15:01.845 ************************************ 00:15:01.845 11:43:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:01.845 11:43:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:01.845 11:43:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.845 11:43:58 env -- common/autotest_common.sh@10 -- # set +x 00:15:01.845 ************************************ 00:15:01.845 START TEST env_pci 00:15:01.845 ************************************ 00:15:01.845 11:43:58 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:01.845 00:15:01.845 00:15:01.845 CUnit - A unit testing framework for C - Version 2.1-3 00:15:01.845 http://cunit.sourceforge.net/ 00:15:01.845 00:15:01.845 00:15:01.845 Suite: pci 00:15:01.845 Test: pci_hook ...[2024-07-25 11:43:58.504376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62103 has claimed it 00:15:01.845 passed 00:15:01.845 00:15:01.845 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.845 suites 1 1 n/a 0 0 00:15:01.845 tests 1 1 1 0 0 00:15:01.845 asserts 25 25 25 0 n/a 00:15:01.845 00:15:01.845 Elapsed time = 0.008 seconds 00:15:01.845 EAL: Cannot find device (10000:00:01.0) 00:15:01.845 EAL: Failed to attach device on primary process 00:15:01.845 00:15:01.845 real 0m0.073s 00:15:01.845 user 0m0.030s 00:15:01.845 sys 0m0.042s 00:15:01.845 11:43:58 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.845 ************************************ 00:15:01.845 END TEST env_pci 00:15:01.845 ************************************ 00:15:01.845 11:43:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:01.845 11:43:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:01.845 11:43:58 env -- env/env.sh@15 -- # uname 00:15:01.845 11:43:58 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:01.845 11:43:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:01.845 11:43:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:01.845 11:43:58 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:01.845 11:43:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.845 11:43:58 env -- common/autotest_common.sh@10 -- # set +x 00:15:01.845 ************************************ 00:15:01.845 START TEST env_dpdk_post_init 00:15:01.845 ************************************ 00:15:01.845 11:43:58 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:01.845 EAL: Detected CPU lcores: 10 00:15:01.845 EAL: Detected NUMA nodes: 1 00:15:01.845 EAL: Detected shared linkage of DPDK 00:15:01.845 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:01.845 EAL: Selected IOVA mode 'PA' 00:15:01.845 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:01.845 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:01.845 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:01.845 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:15:01.845 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:15:02.103 Starting DPDK initialization... 00:15:02.103 Starting SPDK post initialization... 00:15:02.103 SPDK NVMe probe 00:15:02.103 Attaching to 0000:00:10.0 00:15:02.103 Attaching to 0000:00:11.0 00:15:02.103 Attaching to 0000:00:12.0 00:15:02.103 Attaching to 0000:00:13.0 00:15:02.103 Attached to 0000:00:10.0 00:15:02.103 Attached to 0000:00:11.0 00:15:02.103 Attached to 0000:00:13.0 00:15:02.103 Attached to 0000:00:12.0 00:15:02.103 Cleaning up... 
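The four "Attached to ..." lines above confirm that env_dpdk_post_init probed each controller on the PCI addresses that setup.sh had rebound from the kernel nvme driver to uio_pci_generic (the attach order 10.0, 11.0, 13.0, 12.0 reflects probe completion, not BDF order). A quick way to check which driver currently owns a BDF, sketched for the devices in this run:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
        echo "$bdf -> $drv"   # uio_pci_generic while SPDK owns it, nvme after reset
    done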
00:15:02.103 00:15:02.103 real 0m0.350s 00:15:02.103 user 0m0.139s 00:15:02.103 sys 0m0.111s 00:15:02.103 11:43:58 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.103 11:43:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:02.103 ************************************ 00:15:02.103 END TEST env_dpdk_post_init 00:15:02.103 ************************************ 00:15:02.103 11:43:58 env -- env/env.sh@26 -- # uname 00:15:02.103 11:43:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:02.103 11:43:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:02.103 11:43:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:02.103 11:43:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.103 11:43:58 env -- common/autotest_common.sh@10 -- # set +x 00:15:02.103 ************************************ 00:15:02.103 START TEST env_mem_callbacks 00:15:02.103 ************************************ 00:15:02.103 11:43:58 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:02.103 EAL: Detected CPU lcores: 10 00:15:02.103 EAL: Detected NUMA nodes: 1 00:15:02.103 EAL: Detected shared linkage of DPDK 00:15:02.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:02.103 EAL: Selected IOVA mode 'PA' 00:15:02.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:02.361 00:15:02.361 00:15:02.361 CUnit - A unit testing framework for C - Version 2.1-3 00:15:02.361 http://cunit.sourceforge.net/ 00:15:02.361 00:15:02.361 00:15:02.361 Suite: memory 00:15:02.361 Test: test ... 00:15:02.361 register 0x200000200000 2097152 00:15:02.361 malloc 3145728 00:15:02.362 register 0x200000400000 4194304 00:15:02.362 buf 0x2000004fffc0 len 3145728 PASSED 00:15:02.362 malloc 64 00:15:02.362 buf 0x2000004ffec0 len 64 PASSED 00:15:02.362 malloc 4194304 00:15:02.362 register 0x200000800000 6291456 00:15:02.362 buf 0x2000009fffc0 len 4194304 PASSED 00:15:02.362 free 0x2000004fffc0 3145728 00:15:02.362 free 0x2000004ffec0 64 00:15:02.362 unregister 0x200000400000 4194304 PASSED 00:15:02.362 free 0x2000009fffc0 4194304 00:15:02.362 unregister 0x200000800000 6291456 PASSED 00:15:02.362 malloc 8388608 00:15:02.362 register 0x200000400000 10485760 00:15:02.362 buf 0x2000005fffc0 len 8388608 PASSED 00:15:02.362 free 0x2000005fffc0 8388608 00:15:02.362 unregister 0x200000400000 10485760 PASSED 00:15:02.362 passed 00:15:02.362 00:15:02.362 Run Summary: Type Total Ran Passed Failed Inactive 00:15:02.362 suites 1 1 n/a 0 0 00:15:02.362 tests 1 1 1 0 0 00:15:02.362 asserts 15 15 15 0 n/a 00:15:02.362 00:15:02.362 Elapsed time = 0.063 seconds 00:15:02.362 ************************************ 00:15:02.362 END TEST env_mem_callbacks 00:15:02.362 ************************************ 00:15:02.362 00:15:02.362 real 0m0.284s 00:15:02.362 user 0m0.111s 00:15:02.362 sys 0m0.069s 00:15:02.362 11:43:59 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.362 11:43:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:15:02.362 ************************************ 00:15:02.362 END TEST env 00:15:02.362 ************************************ 00:15:02.362 00:15:02.362 real 0m9.202s 00:15:02.362 user 0m7.666s 00:15:02.362 sys 0m1.109s 00:15:02.362 11:43:59 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.362 11:43:59 env -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.362 11:43:59 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:02.362 11:43:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:02.362 11:43:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.362 11:43:59 -- common/autotest_common.sh@10 -- # set +x 00:15:02.362 ************************************ 00:15:02.362 START TEST rpc 00:15:02.362 ************************************ 00:15:02.362 11:43:59 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:02.362 * Looking for test storage... 00:15:02.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:02.620 11:43:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62222 00:15:02.620 11:43:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:02.620 11:43:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:02.620 11:43:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62222 00:15:02.620 11:43:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 62222 ']' 00:15:02.620 11:43:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.620 11:43:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.620 11:43:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.620 11:43:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.620 11:43:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.620 [2024-07-25 11:43:59.528262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:02.620 [2024-07-25 11:43:59.528688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62222 ] 00:15:02.878 [2024-07-25 11:43:59.700910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.878 [2024-07-25 11:43:59.896376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:02.878 [2024-07-25 11:43:59.896443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62222' to capture a snapshot of events at runtime. 00:15:02.878 [2024-07-25 11:43:59.896464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.878 [2024-07-25 11:43:59.896478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.878 [2024-07-25 11:43:59.896492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62222 for offline analysis/debug. 
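The startup notices above point at the trace shared-memory file the target created because it was launched with -e bdev (the bdev tpoint group, which also shows up as mask 0x8 in the trace_get_info output further down). A sketch of driving the same target by hand and snapshotting its trace, assuming the repo layout from this job; the sleep is a stand-in for the harness's waitforlisten:

    cd /home/vagrant/spdk_repo/spdk
    build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    sleep 1                                   # harness uses waitforlisten instead
    scripts/rpc.py bdev_get_bdevs             # same RPC rpc_integrity exercises below
    build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"   # snapshot, per the notice above
    kill "$tgt_pid"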
00:15:02.878 [2024-07-25 11:43:59.896532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.812 11:44:00 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.812 11:44:00 rpc -- common/autotest_common.sh@864 -- # return 0 00:15:03.812 11:44:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:03.812 11:44:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:03.812 11:44:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:03.812 11:44:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:03.812 11:44:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:03.812 11:44:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.812 11:44:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.812 ************************************ 00:15:03.812 START TEST rpc_integrity 00:15:03.812 ************************************ 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:03.812 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.812 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:03.812 { 00:15:03.812 "name": "Malloc0", 00:15:03.812 "aliases": [ 00:15:03.812 "a91bac70-0a80-459a-9bf9-2e3fed5fb9a2" 00:15:03.812 ], 00:15:03.812 "product_name": "Malloc disk", 00:15:03.812 "block_size": 512, 00:15:03.812 "num_blocks": 16384, 00:15:03.812 "uuid": "a91bac70-0a80-459a-9bf9-2e3fed5fb9a2", 00:15:03.812 "assigned_rate_limits": { 00:15:03.812 "rw_ios_per_sec": 0, 00:15:03.812 "rw_mbytes_per_sec": 0, 00:15:03.812 "r_mbytes_per_sec": 0, 00:15:03.812 "w_mbytes_per_sec": 0 00:15:03.812 }, 00:15:03.812 "claimed": false, 00:15:03.812 "zoned": false, 00:15:03.812 "supported_io_types": { 00:15:03.812 "read": true, 00:15:03.812 "write": true, 00:15:03.812 "unmap": true, 00:15:03.812 "flush": true, 
00:15:03.812 "reset": true, 00:15:03.812 "nvme_admin": false, 00:15:03.812 "nvme_io": false, 00:15:03.812 "nvme_io_md": false, 00:15:03.812 "write_zeroes": true, 00:15:03.812 "zcopy": true, 00:15:03.812 "get_zone_info": false, 00:15:03.812 "zone_management": false, 00:15:03.812 "zone_append": false, 00:15:03.812 "compare": false, 00:15:03.812 "compare_and_write": false, 00:15:03.812 "abort": true, 00:15:03.812 "seek_hole": false, 00:15:03.812 "seek_data": false, 00:15:03.812 "copy": true, 00:15:03.812 "nvme_iov_md": false 00:15:03.813 }, 00:15:03.813 "memory_domains": [ 00:15:03.813 { 00:15:03.813 "dma_device_id": "system", 00:15:03.813 "dma_device_type": 1 00:15:03.813 }, 00:15:03.813 { 00:15:03.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.813 "dma_device_type": 2 00:15:03.813 } 00:15:03.813 ], 00:15:03.813 "driver_specific": {} 00:15:03.813 } 00:15:03.813 ]' 00:15:03.813 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:04.071 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:04.071 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:04.071 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.071 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.071 [2024-07-25 11:44:00.858491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:04.071 [2024-07-25 11:44:00.858581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.071 [2024-07-25 11:44:00.858628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:04.071 [2024-07-25 11:44:00.858647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.071 [2024-07-25 11:44:00.861301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.071 [2024-07-25 11:44:00.861347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:04.071 Passthru0 00:15:04.071 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.071 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:04.071 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.071 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.071 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.071 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:04.071 { 00:15:04.071 "name": "Malloc0", 00:15:04.071 "aliases": [ 00:15:04.071 "a91bac70-0a80-459a-9bf9-2e3fed5fb9a2" 00:15:04.071 ], 00:15:04.071 "product_name": "Malloc disk", 00:15:04.071 "block_size": 512, 00:15:04.071 "num_blocks": 16384, 00:15:04.071 "uuid": "a91bac70-0a80-459a-9bf9-2e3fed5fb9a2", 00:15:04.071 "assigned_rate_limits": { 00:15:04.071 "rw_ios_per_sec": 0, 00:15:04.071 "rw_mbytes_per_sec": 0, 00:15:04.071 "r_mbytes_per_sec": 0, 00:15:04.071 "w_mbytes_per_sec": 0 00:15:04.071 }, 00:15:04.071 "claimed": true, 00:15:04.071 "claim_type": "exclusive_write", 00:15:04.071 "zoned": false, 00:15:04.071 "supported_io_types": { 00:15:04.071 "read": true, 00:15:04.071 "write": true, 00:15:04.071 "unmap": true, 00:15:04.071 "flush": true, 00:15:04.071 "reset": true, 00:15:04.071 "nvme_admin": false, 00:15:04.071 "nvme_io": false, 00:15:04.071 "nvme_io_md": false, 00:15:04.071 "write_zeroes": true, 00:15:04.071 "zcopy": true, 
00:15:04.071 "get_zone_info": false, 00:15:04.071 "zone_management": false, 00:15:04.071 "zone_append": false, 00:15:04.071 "compare": false, 00:15:04.071 "compare_and_write": false, 00:15:04.071 "abort": true, 00:15:04.071 "seek_hole": false, 00:15:04.071 "seek_data": false, 00:15:04.071 "copy": true, 00:15:04.071 "nvme_iov_md": false 00:15:04.071 }, 00:15:04.071 "memory_domains": [ 00:15:04.071 { 00:15:04.071 "dma_device_id": "system", 00:15:04.071 "dma_device_type": 1 00:15:04.071 }, 00:15:04.071 { 00:15:04.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.071 "dma_device_type": 2 00:15:04.071 } 00:15:04.071 ], 00:15:04.071 "driver_specific": {} 00:15:04.071 }, 00:15:04.071 { 00:15:04.071 "name": "Passthru0", 00:15:04.071 "aliases": [ 00:15:04.071 "10849b6c-de80-500c-be64-c7e9b3e0338f" 00:15:04.071 ], 00:15:04.071 "product_name": "passthru", 00:15:04.071 "block_size": 512, 00:15:04.071 "num_blocks": 16384, 00:15:04.071 "uuid": "10849b6c-de80-500c-be64-c7e9b3e0338f", 00:15:04.071 "assigned_rate_limits": { 00:15:04.071 "rw_ios_per_sec": 0, 00:15:04.071 "rw_mbytes_per_sec": 0, 00:15:04.071 "r_mbytes_per_sec": 0, 00:15:04.071 "w_mbytes_per_sec": 0 00:15:04.071 }, 00:15:04.071 "claimed": false, 00:15:04.071 "zoned": false, 00:15:04.071 "supported_io_types": { 00:15:04.071 "read": true, 00:15:04.071 "write": true, 00:15:04.071 "unmap": true, 00:15:04.071 "flush": true, 00:15:04.071 "reset": true, 00:15:04.071 "nvme_admin": false, 00:15:04.071 "nvme_io": false, 00:15:04.071 "nvme_io_md": false, 00:15:04.071 "write_zeroes": true, 00:15:04.071 "zcopy": true, 00:15:04.071 "get_zone_info": false, 00:15:04.071 "zone_management": false, 00:15:04.071 "zone_append": false, 00:15:04.071 "compare": false, 00:15:04.071 "compare_and_write": false, 00:15:04.071 "abort": true, 00:15:04.071 "seek_hole": false, 00:15:04.072 "seek_data": false, 00:15:04.072 "copy": true, 00:15:04.072 "nvme_iov_md": false 00:15:04.072 }, 00:15:04.072 "memory_domains": [ 00:15:04.072 { 00:15:04.072 "dma_device_id": "system", 00:15:04.072 "dma_device_type": 1 00:15:04.072 }, 00:15:04.072 { 00:15:04.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.072 "dma_device_type": 2 00:15:04.072 } 00:15:04.072 ], 00:15:04.072 "driver_specific": { 00:15:04.072 "passthru": { 00:15:04.072 "name": "Passthru0", 00:15:04.072 "base_bdev_name": "Malloc0" 00:15:04.072 } 00:15:04.072 } 00:15:04.072 } 00:15:04.072 ]' 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:15:04.072 11:44:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:04.072 11:44:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:04.072 ************************************ 00:15:04.072 END TEST rpc_integrity 00:15:04.072 ************************************ 00:15:04.072 11:44:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:04.072 00:15:04.072 real 0m0.336s 00:15:04.072 user 0m0.207s 00:15:04.072 sys 0m0.039s 00:15:04.072 11:44:01 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.072 11:44:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.072 11:44:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:04.072 11:44:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:04.072 11:44:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.072 11:44:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.072 ************************************ 00:15:04.072 START TEST rpc_plugins 00:15:04.072 ************************************ 00:15:04.072 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:15:04.072 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:04.072 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.072 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.072 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.072 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:04.072 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:04.072 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.072 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:04.330 { 00:15:04.330 "name": "Malloc1", 00:15:04.330 "aliases": [ 00:15:04.330 "51952119-6470-4ac0-8c77-0e747b426159" 00:15:04.330 ], 00:15:04.330 "product_name": "Malloc disk", 00:15:04.330 "block_size": 4096, 00:15:04.330 "num_blocks": 256, 00:15:04.330 "uuid": "51952119-6470-4ac0-8c77-0e747b426159", 00:15:04.330 "assigned_rate_limits": { 00:15:04.330 "rw_ios_per_sec": 0, 00:15:04.330 "rw_mbytes_per_sec": 0, 00:15:04.330 "r_mbytes_per_sec": 0, 00:15:04.330 "w_mbytes_per_sec": 0 00:15:04.330 }, 00:15:04.330 "claimed": false, 00:15:04.330 "zoned": false, 00:15:04.330 "supported_io_types": { 00:15:04.330 "read": true, 00:15:04.330 "write": true, 00:15:04.330 "unmap": true, 00:15:04.330 "flush": true, 00:15:04.330 "reset": true, 00:15:04.330 "nvme_admin": false, 00:15:04.330 "nvme_io": false, 00:15:04.330 "nvme_io_md": false, 00:15:04.330 "write_zeroes": true, 00:15:04.330 "zcopy": true, 00:15:04.330 "get_zone_info": false, 00:15:04.330 "zone_management": false, 00:15:04.330 "zone_append": false, 00:15:04.330 "compare": false, 00:15:04.330 "compare_and_write": false, 00:15:04.330 "abort": true, 00:15:04.330 "seek_hole": false, 00:15:04.330 "seek_data": false, 00:15:04.330 "copy": true, 00:15:04.330 "nvme_iov_md": false 00:15:04.330 }, 00:15:04.330 "memory_domains": [ 00:15:04.330 { 00:15:04.330 "dma_device_id": "system", 00:15:04.330 "dma_device_type": 1 00:15:04.330 }, 00:15:04.330 { 00:15:04.330 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:04.330 "dma_device_type": 2 00:15:04.330 } 00:15:04.330 ], 00:15:04.330 "driver_specific": {} 00:15:04.330 } 00:15:04.330 ]' 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:04.330 11:44:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:04.330 00:15:04.330 real 0m0.162s 00:15:04.330 user 0m0.114s 00:15:04.330 sys 0m0.012s 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.330 11:44:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 ************************************ 00:15:04.330 END TEST rpc_plugins 00:15:04.330 ************************************ 00:15:04.330 11:44:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:04.330 11:44:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:04.330 11:44:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.330 11:44:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 ************************************ 00:15:04.330 START TEST rpc_trace_cmd_test 00:15:04.330 ************************************ 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:04.330 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62222", 00:15:04.330 "tpoint_group_mask": "0x8", 00:15:04.330 "iscsi_conn": { 00:15:04.330 "mask": "0x2", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "scsi": { 00:15:04.330 "mask": "0x4", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "bdev": { 00:15:04.330 "mask": "0x8", 00:15:04.330 "tpoint_mask": "0xffffffffffffffff" 00:15:04.330 }, 00:15:04.330 "nvmf_rdma": { 00:15:04.330 "mask": "0x10", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "nvmf_tcp": { 00:15:04.330 "mask": "0x20", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "ftl": { 00:15:04.330 "mask": "0x40", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "blobfs": { 00:15:04.330 "mask": "0x80", 00:15:04.330 
"tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "dsa": { 00:15:04.330 "mask": "0x200", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "thread": { 00:15:04.330 "mask": "0x400", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "nvme_pcie": { 00:15:04.330 "mask": "0x800", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "iaa": { 00:15:04.330 "mask": "0x1000", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "nvme_tcp": { 00:15:04.330 "mask": "0x2000", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "bdev_nvme": { 00:15:04.330 "mask": "0x4000", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 }, 00:15:04.330 "sock": { 00:15:04.330 "mask": "0x8000", 00:15:04.330 "tpoint_mask": "0x0" 00:15:04.330 } 00:15:04.330 }' 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:15:04.330 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:04.588 ************************************ 00:15:04.588 END TEST rpc_trace_cmd_test 00:15:04.588 ************************************ 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:04.588 00:15:04.588 real 0m0.255s 00:15:04.588 user 0m0.222s 00:15:04.588 sys 0m0.026s 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.588 11:44:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.588 11:44:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:04.588 11:44:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:04.588 11:44:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:04.588 11:44:01 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:04.588 11:44:01 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.588 11:44:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.588 ************************************ 00:15:04.588 START TEST rpc_daemon_integrity 00:15:04.588 ************************************ 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:04.588 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:04.847 11:44:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:04.847 { 00:15:04.847 "name": "Malloc2", 00:15:04.847 "aliases": [ 00:15:04.847 "60dfd828-b87b-46f7-ab79-2003c9df2a91" 00:15:04.847 ], 00:15:04.847 "product_name": "Malloc disk", 00:15:04.847 "block_size": 512, 00:15:04.847 "num_blocks": 16384, 00:15:04.847 "uuid": "60dfd828-b87b-46f7-ab79-2003c9df2a91", 00:15:04.847 "assigned_rate_limits": { 00:15:04.847 "rw_ios_per_sec": 0, 00:15:04.847 "rw_mbytes_per_sec": 0, 00:15:04.847 "r_mbytes_per_sec": 0, 00:15:04.847 "w_mbytes_per_sec": 0 00:15:04.847 }, 00:15:04.847 "claimed": false, 00:15:04.847 "zoned": false, 00:15:04.847 "supported_io_types": { 00:15:04.847 "read": true, 00:15:04.847 "write": true, 00:15:04.847 "unmap": true, 00:15:04.847 "flush": true, 00:15:04.847 "reset": true, 00:15:04.847 "nvme_admin": false, 00:15:04.847 "nvme_io": false, 00:15:04.847 "nvme_io_md": false, 00:15:04.847 "write_zeroes": true, 00:15:04.847 "zcopy": true, 00:15:04.847 "get_zone_info": false, 00:15:04.847 "zone_management": false, 00:15:04.847 "zone_append": false, 00:15:04.847 "compare": false, 00:15:04.847 "compare_and_write": false, 00:15:04.847 "abort": true, 00:15:04.847 "seek_hole": false, 00:15:04.847 "seek_data": false, 00:15:04.847 "copy": true, 00:15:04.847 "nvme_iov_md": false 00:15:04.847 }, 00:15:04.847 "memory_domains": [ 00:15:04.847 { 00:15:04.847 "dma_device_id": "system", 00:15:04.847 "dma_device_type": 1 00:15:04.847 }, 00:15:04.847 { 00:15:04.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.847 "dma_device_type": 2 00:15:04.847 } 00:15:04.847 ], 00:15:04.847 "driver_specific": {} 00:15:04.847 } 00:15:04.847 ]' 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.847 [2024-07-25 11:44:01.756876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:04.847 [2024-07-25 11:44:01.756962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.847 [2024-07-25 11:44:01.757003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:04.847 [2024-07-25 11:44:01.757020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.847 [2024-07-25 11:44:01.759805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.847 [2024-07-25 11:44:01.759860] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:04.847 Passthru0 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:04.847 { 00:15:04.847 "name": "Malloc2", 00:15:04.847 "aliases": [ 00:15:04.847 "60dfd828-b87b-46f7-ab79-2003c9df2a91" 00:15:04.847 ], 00:15:04.847 "product_name": "Malloc disk", 00:15:04.847 "block_size": 512, 00:15:04.847 "num_blocks": 16384, 00:15:04.847 "uuid": "60dfd828-b87b-46f7-ab79-2003c9df2a91", 00:15:04.847 "assigned_rate_limits": { 00:15:04.847 "rw_ios_per_sec": 0, 00:15:04.847 "rw_mbytes_per_sec": 0, 00:15:04.847 "r_mbytes_per_sec": 0, 00:15:04.847 "w_mbytes_per_sec": 0 00:15:04.847 }, 00:15:04.847 "claimed": true, 00:15:04.847 "claim_type": "exclusive_write", 00:15:04.847 "zoned": false, 00:15:04.847 "supported_io_types": { 00:15:04.847 "read": true, 00:15:04.847 "write": true, 00:15:04.847 "unmap": true, 00:15:04.847 "flush": true, 00:15:04.847 "reset": true, 00:15:04.847 "nvme_admin": false, 00:15:04.847 "nvme_io": false, 00:15:04.847 "nvme_io_md": false, 00:15:04.847 "write_zeroes": true, 00:15:04.847 "zcopy": true, 00:15:04.847 "get_zone_info": false, 00:15:04.847 "zone_management": false, 00:15:04.847 "zone_append": false, 00:15:04.847 "compare": false, 00:15:04.847 "compare_and_write": false, 00:15:04.847 "abort": true, 00:15:04.847 "seek_hole": false, 00:15:04.847 "seek_data": false, 00:15:04.847 "copy": true, 00:15:04.847 "nvme_iov_md": false 00:15:04.847 }, 00:15:04.847 "memory_domains": [ 00:15:04.847 { 00:15:04.847 "dma_device_id": "system", 00:15:04.847 "dma_device_type": 1 00:15:04.847 }, 00:15:04.847 { 00:15:04.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.847 "dma_device_type": 2 00:15:04.847 } 00:15:04.847 ], 00:15:04.847 "driver_specific": {} 00:15:04.847 }, 00:15:04.847 { 00:15:04.847 "name": "Passthru0", 00:15:04.847 "aliases": [ 00:15:04.847 "02de9b11-7b3c-5e61-85cf-f092a8089204" 00:15:04.847 ], 00:15:04.847 "product_name": "passthru", 00:15:04.847 "block_size": 512, 00:15:04.847 "num_blocks": 16384, 00:15:04.847 "uuid": "02de9b11-7b3c-5e61-85cf-f092a8089204", 00:15:04.847 "assigned_rate_limits": { 00:15:04.847 "rw_ios_per_sec": 0, 00:15:04.847 "rw_mbytes_per_sec": 0, 00:15:04.847 "r_mbytes_per_sec": 0, 00:15:04.847 "w_mbytes_per_sec": 0 00:15:04.847 }, 00:15:04.847 "claimed": false, 00:15:04.847 "zoned": false, 00:15:04.847 "supported_io_types": { 00:15:04.847 "read": true, 00:15:04.847 "write": true, 00:15:04.847 "unmap": true, 00:15:04.847 "flush": true, 00:15:04.847 "reset": true, 00:15:04.847 "nvme_admin": false, 00:15:04.847 "nvme_io": false, 00:15:04.847 "nvme_io_md": false, 00:15:04.847 "write_zeroes": true, 00:15:04.847 "zcopy": true, 00:15:04.847 "get_zone_info": false, 00:15:04.847 "zone_management": false, 00:15:04.847 "zone_append": false, 00:15:04.847 "compare": false, 00:15:04.847 "compare_and_write": false, 00:15:04.847 "abort": true, 00:15:04.847 "seek_hole": false, 00:15:04.847 "seek_data": false, 00:15:04.847 "copy": true, 00:15:04.847 "nvme_iov_md": false 00:15:04.847 }, 00:15:04.847 
"memory_domains": [ 00:15:04.847 { 00:15:04.847 "dma_device_id": "system", 00:15:04.847 "dma_device_type": 1 00:15:04.847 }, 00:15:04.847 { 00:15:04.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.847 "dma_device_type": 2 00:15:04.847 } 00:15:04.847 ], 00:15:04.847 "driver_specific": { 00:15:04.847 "passthru": { 00:15:04.847 "name": "Passthru0", 00:15:04.847 "base_bdev_name": "Malloc2" 00:15:04.847 } 00:15:04.847 } 00:15:04.847 } 00:15:04.847 ]' 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.847 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:04.848 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.848 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:04.848 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.848 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.105 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.105 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:05.105 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:05.105 ************************************ 00:15:05.105 END TEST rpc_daemon_integrity 00:15:05.105 ************************************ 00:15:05.105 11:44:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:05.105 00:15:05.105 real 0m0.356s 00:15:05.105 user 0m0.229s 00:15:05.105 sys 0m0.034s 00:15:05.105 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.105 11:44:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:05.105 11:44:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:05.105 11:44:01 rpc -- rpc/rpc.sh@84 -- # killprocess 62222 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 62222 ']' 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@954 -- # kill -0 62222 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@955 -- # uname 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62222 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.105 killing process with pid 62222 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62222' 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@969 -- # kill 62222 00:15:05.105 11:44:01 rpc -- common/autotest_common.sh@974 -- # wait 62222 00:15:07.660 00:15:07.660 real 0m4.831s 00:15:07.660 user 0m5.631s 
00:15:07.660 sys 0m0.665s 00:15:07.660 11:44:04 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.660 11:44:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.660 ************************************ 00:15:07.660 END TEST rpc 00:15:07.660 ************************************ 00:15:07.660 11:44:04 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:07.660 11:44:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.660 11:44:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.660 11:44:04 -- common/autotest_common.sh@10 -- # set +x 00:15:07.660 ************************************ 00:15:07.660 START TEST skip_rpc 00:15:07.660 ************************************ 00:15:07.660 11:44:04 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:07.660 * Looking for test storage... 00:15:07.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:07.660 11:44:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:07.660 11:44:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:07.660 11:44:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:07.660 11:44:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.660 11:44:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.660 11:44:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.660 ************************************ 00:15:07.660 START TEST skip_rpc 00:15:07.660 ************************************ 00:15:07.660 11:44:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:15:07.660 11:44:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62438 00:15:07.660 11:44:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:07.660 11:44:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:07.660 11:44:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:07.660 [2024-07-25 11:44:04.391143] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
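The skip_rpc test starting here inverts the usual assertion: the target is launched with --no-rpc-server, so rpc_cmd must fail, and the NOT helper (autotest_common's negation wrapper, as the xtrace that follows shows) passes only when the wrapped command exits non-zero. The shape of the test reduced to its essentials (a sketch; spdk_tgt stands for the full build/bin/spdk_tgt path used in the log):

    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    NOT rpc_cmd spdk_get_version   # passes precisely because no RPC server is up
    killprocess "$spdk_pid"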
00:15:07.660 [2024-07-25 11:44:04.391293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62438 ] 00:15:07.660 [2024-07-25 11:44:04.575566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.917 [2024-07-25 11:44:04.810090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62438 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62438 ']' 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62438 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62438 00:15:13.178 killing process with pid 62438 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62438' 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62438 00:15:13.178 11:44:09 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62438 00:15:14.552 ************************************ 00:15:14.552 END TEST skip_rpc 00:15:14.552 ************************************ 00:15:14.552 00:15:14.552 real 0m7.221s 00:15:14.552 user 0m6.796s 00:15:14.552 sys 0m0.302s 00:15:14.552 11:44:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.552 11:44:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:15:14.552 11:44:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:14.552 11:44:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:14.552 11:44:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.552 11:44:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.552 ************************************ 00:15:14.552 START TEST skip_rpc_with_json 00:15:14.552 ************************************ 00:15:14.552 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:15:14.552 11:44:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:14.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62542 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62542 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62542 ']' 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.553 11:44:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:14.810 [2024-07-25 11:44:11.680938] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
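The JSON blob that follows is the saved configuration written to CONFIG_PATH (test/rpc/config.json). skip_rpc_with_json first confirms no TCP transport exists (the nvmf_get_transports error response below; both that call and nvmf_create_transport trace to the same script line, which suggests an || chain), saves the config, then restarts the target with --no-rpc-server --json and greps its log for the transport-init notice to prove the saved config was replayed at boot. In outline (a sketch; CONFIG_PATH and LOG_PATH are the variables set at the top of skip_rpc.sh, and the save_config redirection is inferred from the cat that follows it):

    rpc_cmd nvmf_get_transports --trtype tcp || rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > "$CONFIG_PATH"
    killprocess "$spdk_pid"
    spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" > "$LOG_PATH" 2>&1 &
    sleep 5; killprocess $!
    grep -q 'TCP Transport Init' "$LOG_PATH"   # config replay reached the nvmf subsystem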
00:15:14.810 [2024-07-25 11:44:11.681147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:15:15.069 [2024-07-25 11:44:11.875218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.326 [2024-07-25 11:44:12.126215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:15.891 [2024-07-25 11:44:12.825039] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:15.891 request: 00:15:15.891 { 00:15:15.891 "trtype": "tcp", 00:15:15.891 "method": "nvmf_get_transports", 00:15:15.891 "req_id": 1 00:15:15.891 } 00:15:15.891 Got JSON-RPC error response 00:15:15.891 response: 00:15:15.891 { 00:15:15.891 "code": -19, 00:15:15.891 "message": "No such device" 00:15:15.891 } 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:15.891 [2024-07-25 11:44:12.833150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.891 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:16.149 11:44:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.149 11:44:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:16.149 { 00:15:16.149 "subsystems": [ 00:15:16.149 { 00:15:16.149 "subsystem": "keyring", 00:15:16.149 "config": [] 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "subsystem": "iobuf", 00:15:16.149 "config": [ 00:15:16.149 { 00:15:16.149 "method": "iobuf_set_options", 00:15:16.149 "params": { 00:15:16.149 "small_pool_count": 8192, 00:15:16.149 "large_pool_count": 1024, 00:15:16.149 "small_bufsize": 8192, 00:15:16.149 "large_bufsize": 135168 00:15:16.149 } 00:15:16.149 } 00:15:16.149 ] 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "subsystem": "sock", 00:15:16.149 "config": [ 00:15:16.150 { 00:15:16.150 "method": "sock_set_default_impl", 00:15:16.150 "params": { 00:15:16.150 "impl_name": "posix" 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "sock_impl_set_options", 00:15:16.150 "params": { 00:15:16.150 "impl_name": "ssl", 00:15:16.150 "recv_buf_size": 4096, 00:15:16.150 "send_buf_size": 4096, 
00:15:16.150 "enable_recv_pipe": true, 00:15:16.150 "enable_quickack": false, 00:15:16.150 "enable_placement_id": 0, 00:15:16.150 "enable_zerocopy_send_server": true, 00:15:16.150 "enable_zerocopy_send_client": false, 00:15:16.150 "zerocopy_threshold": 0, 00:15:16.150 "tls_version": 0, 00:15:16.150 "enable_ktls": false 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "sock_impl_set_options", 00:15:16.150 "params": { 00:15:16.150 "impl_name": "posix", 00:15:16.150 "recv_buf_size": 2097152, 00:15:16.150 "send_buf_size": 2097152, 00:15:16.150 "enable_recv_pipe": true, 00:15:16.150 "enable_quickack": false, 00:15:16.150 "enable_placement_id": 0, 00:15:16.150 "enable_zerocopy_send_server": true, 00:15:16.150 "enable_zerocopy_send_client": false, 00:15:16.150 "zerocopy_threshold": 0, 00:15:16.150 "tls_version": 0, 00:15:16.150 "enable_ktls": false 00:15:16.150 } 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "vmd", 00:15:16.150 "config": [] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "accel", 00:15:16.150 "config": [ 00:15:16.150 { 00:15:16.150 "method": "accel_set_options", 00:15:16.150 "params": { 00:15:16.150 "small_cache_size": 128, 00:15:16.150 "large_cache_size": 16, 00:15:16.150 "task_count": 2048, 00:15:16.150 "sequence_count": 2048, 00:15:16.150 "buf_count": 2048 00:15:16.150 } 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "bdev", 00:15:16.150 "config": [ 00:15:16.150 { 00:15:16.150 "method": "bdev_set_options", 00:15:16.150 "params": { 00:15:16.150 "bdev_io_pool_size": 65535, 00:15:16.150 "bdev_io_cache_size": 256, 00:15:16.150 "bdev_auto_examine": true, 00:15:16.150 "iobuf_small_cache_size": 128, 00:15:16.150 "iobuf_large_cache_size": 16 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "bdev_raid_set_options", 00:15:16.150 "params": { 00:15:16.150 "process_window_size_kb": 1024, 00:15:16.150 "process_max_bandwidth_mb_sec": 0 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "bdev_iscsi_set_options", 00:15:16.150 "params": { 00:15:16.150 "timeout_sec": 30 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "bdev_nvme_set_options", 00:15:16.150 "params": { 00:15:16.150 "action_on_timeout": "none", 00:15:16.150 "timeout_us": 0, 00:15:16.150 "timeout_admin_us": 0, 00:15:16.150 "keep_alive_timeout_ms": 10000, 00:15:16.150 "arbitration_burst": 0, 00:15:16.150 "low_priority_weight": 0, 00:15:16.150 "medium_priority_weight": 0, 00:15:16.150 "high_priority_weight": 0, 00:15:16.150 "nvme_adminq_poll_period_us": 10000, 00:15:16.150 "nvme_ioq_poll_period_us": 0, 00:15:16.150 "io_queue_requests": 0, 00:15:16.150 "delay_cmd_submit": true, 00:15:16.150 "transport_retry_count": 4, 00:15:16.150 "bdev_retry_count": 3, 00:15:16.150 "transport_ack_timeout": 0, 00:15:16.150 "ctrlr_loss_timeout_sec": 0, 00:15:16.150 "reconnect_delay_sec": 0, 00:15:16.150 "fast_io_fail_timeout_sec": 0, 00:15:16.150 "disable_auto_failback": false, 00:15:16.150 "generate_uuids": false, 00:15:16.150 "transport_tos": 0, 00:15:16.150 "nvme_error_stat": false, 00:15:16.150 "rdma_srq_size": 0, 00:15:16.150 "io_path_stat": false, 00:15:16.150 "allow_accel_sequence": false, 00:15:16.150 "rdma_max_cq_size": 0, 00:15:16.150 "rdma_cm_event_timeout_ms": 0, 00:15:16.150 "dhchap_digests": [ 00:15:16.150 "sha256", 00:15:16.150 "sha384", 00:15:16.150 "sha512" 00:15:16.150 ], 00:15:16.150 "dhchap_dhgroups": [ 00:15:16.150 "null", 00:15:16.150 "ffdhe2048", 00:15:16.150 
"ffdhe3072", 00:15:16.150 "ffdhe4096", 00:15:16.150 "ffdhe6144", 00:15:16.150 "ffdhe8192" 00:15:16.150 ] 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "bdev_nvme_set_hotplug", 00:15:16.150 "params": { 00:15:16.150 "period_us": 100000, 00:15:16.150 "enable": false 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "bdev_wait_for_examine" 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "scsi", 00:15:16.150 "config": null 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "scheduler", 00:15:16.150 "config": [ 00:15:16.150 { 00:15:16.150 "method": "framework_set_scheduler", 00:15:16.150 "params": { 00:15:16.150 "name": "static" 00:15:16.150 } 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "vhost_scsi", 00:15:16.150 "config": [] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "vhost_blk", 00:15:16.150 "config": [] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "ublk", 00:15:16.150 "config": [] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "nbd", 00:15:16.150 "config": [] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "nvmf", 00:15:16.150 "config": [ 00:15:16.150 { 00:15:16.150 "method": "nvmf_set_config", 00:15:16.150 "params": { 00:15:16.150 "discovery_filter": "match_any", 00:15:16.150 "admin_cmd_passthru": { 00:15:16.150 "identify_ctrlr": false 00:15:16.150 } 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "nvmf_set_max_subsystems", 00:15:16.150 "params": { 00:15:16.150 "max_subsystems": 1024 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "nvmf_set_crdt", 00:15:16.150 "params": { 00:15:16.150 "crdt1": 0, 00:15:16.150 "crdt2": 0, 00:15:16.150 "crdt3": 0 00:15:16.150 } 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "method": "nvmf_create_transport", 00:15:16.150 "params": { 00:15:16.150 "trtype": "TCP", 00:15:16.150 "max_queue_depth": 128, 00:15:16.150 "max_io_qpairs_per_ctrlr": 127, 00:15:16.150 "in_capsule_data_size": 4096, 00:15:16.150 "max_io_size": 131072, 00:15:16.150 "io_unit_size": 131072, 00:15:16.150 "max_aq_depth": 128, 00:15:16.150 "num_shared_buffers": 511, 00:15:16.150 "buf_cache_size": 4294967295, 00:15:16.150 "dif_insert_or_strip": false, 00:15:16.150 "zcopy": false, 00:15:16.150 "c2h_success": true, 00:15:16.150 "sock_priority": 0, 00:15:16.150 "abort_timeout_sec": 1, 00:15:16.150 "ack_timeout": 0, 00:15:16.150 "data_wr_pool_size": 0 00:15:16.150 } 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 }, 00:15:16.150 { 00:15:16.150 "subsystem": "iscsi", 00:15:16.150 "config": [ 00:15:16.150 { 00:15:16.150 "method": "iscsi_set_options", 00:15:16.150 "params": { 00:15:16.150 "node_base": "iqn.2016-06.io.spdk", 00:15:16.150 "max_sessions": 128, 00:15:16.150 "max_connections_per_session": 2, 00:15:16.150 "max_queue_depth": 64, 00:15:16.150 "default_time2wait": 2, 00:15:16.150 "default_time2retain": 20, 00:15:16.150 "first_burst_length": 8192, 00:15:16.150 "immediate_data": true, 00:15:16.150 "allow_duplicated_isid": false, 00:15:16.150 "error_recovery_level": 0, 00:15:16.150 "nop_timeout": 60, 00:15:16.150 "nop_in_interval": 30, 00:15:16.150 "disable_chap": false, 00:15:16.150 "require_chap": false, 00:15:16.150 "mutual_chap": false, 00:15:16.150 "chap_group": 0, 00:15:16.150 "max_large_datain_per_connection": 64, 00:15:16.150 "max_r2t_per_connection": 4, 00:15:16.150 "pdu_pool_size": 36864, 00:15:16.150 "immediate_data_pool_size": 16384, 00:15:16.150 "data_out_pool_size": 2048 
00:15:16.150 } 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 } 00:15:16.150 ] 00:15:16.150 } 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62542 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62542 ']' 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62542 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62542 00:15:16.150 killing process with pid 62542 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.150 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62542' 00:15:16.151 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62542 00:15:16.151 11:44:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62542 00:15:18.677 11:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62598 00:15:18.677 11:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:18.677 11:44:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62598 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62598 ']' 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62598 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62598 00:15:23.939 killing process with pid 62598 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62598' 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62598 00:15:23.939 11:44:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62598 00:15:25.312 11:44:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:25.312 11:44:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:25.312 ************************************ 00:15:25.312 END TEST skip_rpc_with_json 00:15:25.312 ************************************ 00:15:25.312 00:15:25.312 real 0m10.785s 00:15:25.312 user 0m10.529s 00:15:25.312 sys 0m0.755s 00:15:25.312 11:44:22 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.312 11:44:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:25.571 11:44:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:25.571 11:44:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:25.571 11:44:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.571 11:44:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.571 ************************************ 00:15:25.571 START TEST skip_rpc_with_delay 00:15:25.571 ************************************ 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:25.571 [2024-07-25 11:44:22.508103] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:15:25.571 [2024-07-25 11:44:22.508284] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:25.571 00:15:25.571 real 0m0.191s 00:15:25.571 user 0m0.104s 00:15:25.571 sys 0m0.084s 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.571 11:44:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:25.571 ************************************ 00:15:25.571 END TEST skip_rpc_with_delay 00:15:25.571 ************************************ 00:15:25.571 11:44:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:25.571 11:44:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:25.571 11:44:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:25.571 11:44:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:25.571 11:44:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.571 11:44:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.571 ************************************ 00:15:25.571 START TEST exit_on_failed_rpc_init 00:15:25.571 ************************************ 00:15:25.571 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:15:25.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62726 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62726 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62726 ']' 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.829 11:44:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:25.829 [2024-07-25 11:44:22.715055] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
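skip_rpc_with_delay, which just finished above, is a pure argument-validation test: --wait-for-rpc only makes sense when an RPC server will start, so combining it with --no-rpc-server must make spdk_tgt refuse to boot (the app.c error above), and the NOT/valid_exec_arg machinery converts that expected failure into a pass. Reduced to its core (a sketch):

    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected: app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.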
00:15:25.829 [2024-07-25 11:44:22.715906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62726 ] 00:15:26.087 [2024-07-25 11:44:22.877742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.345 [2024-07-25 11:44:23.138809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:26.910 11:44:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:27.168 [2024-07-25 11:44:24.000794] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:27.168 [2024-07-25 11:44:24.001238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:15:27.426 [2024-07-25 11:44:24.211289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.684 [2024-07-25 11:44:24.485974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.684 [2024-07-25 11:44:24.486359] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:27.684 [2024-07-25 11:44:24.486394] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:27.684 [2024-07-25 11:44:24.486412] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62726 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62726 ']' 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62726 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62726 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62726' 00:15:27.942 killing process with pid 62726 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62726 00:15:27.942 11:44:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62726 00:15:30.471 00:15:30.471 real 0m4.479s 00:15:30.471 user 0m5.376s 00:15:30.471 sys 0m0.556s 00:15:30.471 11:44:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.471 11:44:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:30.471 ************************************ 00:15:30.471 END TEST exit_on_failed_rpc_init 00:15:30.471 ************************************ 00:15:30.471 11:44:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:30.471 ************************************ 00:15:30.471 END TEST skip_rpc 00:15:30.471 ************************************ 00:15:30.471 00:15:30.471 real 0m22.907s 00:15:30.471 user 0m22.897s 00:15:30.471 sys 0m1.831s 00:15:30.471 11:44:27 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.471 11:44:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.471 11:44:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:30.471 11:44:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:30.471 11:44:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.471 11:44:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.471 
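exit_on_failed_rpc_init, completed above, exercises the failure path of RPC startup: with pid 62726 holding /var/tmp/spdk.sock, a second target (core mask 0x2, so the cores do not clash) fails in _spdk_rpc_listen with "socket path in use", spdk_app_stop exits non-zero, and the harness maps the resulting exit status (es=234, then 106) down to a plain failure, which is exactly what NOT expects. In outline (a sketch using only helpers visible in the log):

    spdk_tgt -m 0x1 &              # first instance owns /var/tmp/spdk.sock
    waitforlisten "$spdk_pid"
    NOT spdk_tgt -m 0x2            # second instance must fail RPC init
    killprocess "$spdk_pid"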
************************************ 00:15:30.471 START TEST rpc_client 00:15:30.471 ************************************ 00:15:30.471 11:44:27 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:30.471 * Looking for test storage... 00:15:30.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:30.471 11:44:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:30.471 OK 00:15:30.471 11:44:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:30.471 00:15:30.471 real 0m0.126s 00:15:30.471 user 0m0.057s 00:15:30.471 sys 0m0.073s 00:15:30.471 11:44:27 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.471 11:44:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:30.471 ************************************ 00:15:30.471 END TEST rpc_client 00:15:30.471 ************************************ 00:15:30.471 11:44:27 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:30.471 11:44:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:30.471 11:44:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.471 11:44:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.471 ************************************ 00:15:30.471 START TEST json_config 00:15:30.471 ************************************ 00:15:30.471 11:44:27 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49af732a-113d-4feb-846e-4f875fd14a22 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=49af732a-113d-4feb-846e-4f875fd14a22 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.471 11:44:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.471 11:44:27 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.471 11:44:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.471 11:44:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.471 11:44:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.471 11:44:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.471 11:44:27 json_config -- paths/export.sh@5 -- # export PATH 00:15:30.471 11:44:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@47 -- # : 0 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.471 11:44:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:30.471 WARNING: No tests are enabled so not running JSON configuration tests 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:15:30.471 11:44:27 json_config -- json_config/json_config.sh@28 -- # exit 0 00:15:30.471 00:15:30.471 real 0m0.070s 00:15:30.471 user 0m0.034s 00:15:30.471 sys 0m0.032s 00:15:30.471 ************************************ 00:15:30.472 END TEST json_config 00:15:30.472 ************************************ 00:15:30.472 11:44:27 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.472 11:44:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:30.472 11:44:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:30.472 11:44:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:30.472 11:44:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.472 11:44:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.472 ************************************ 00:15:30.472 START TEST json_config_extra_key 00:15:30.472 ************************************ 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49af732a-113d-4feb-846e-4f875fd14a22 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=49af732a-113d-4feb-846e-4f875fd14a22 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.472 11:44:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.472 11:44:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.472 11:44:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.472 
11:44:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.472 11:44:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.472 11:44:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.472 11:44:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:15:30.472 11:44:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.472 11:44:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:30.472 11:44:27 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:30.472 INFO: launching applications... 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:30.472 11:44:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62930 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:30.472 Waiting for target to run... 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:30.472 11:44:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62930 /var/tmp/spdk_tgt.sock 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 62930 ']' 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:30.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.472 11:44:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:30.730 [2024-07-25 11:44:27.596116] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
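While the target's startup banner continues below, the launch pattern just logged is worth spelling out: the harness starts spdk_tgt with a JSON config plus an explicit RPC socket, records the pid (62930 here), and then polls that socket until the target answers. A minimal sketch of the pattern, assuming the SPDK tree at /home/vagrant/spdk_repo/spdk as in this run; the retry count and interval are illustrative, not the harness's exact values:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/test/json_config/extra_key.json" &
pid=$!
# Poll the UNIX-domain RPC socket until the target is ready to serve.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done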
00:15:30.730 [2024-07-25 11:44:27.596490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62930 ] 00:15:30.988 [2024-07-25 11:44:27.912536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.246 [2024-07-25 11:44:28.089539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.811 11:44:28 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.811 11:44:28 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:31.811 00:15:31.811 INFO: shutting down applications... 00:15:31.811 11:44:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:15:31.811 11:44:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62930 ]] 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62930 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62930 00:15:31.811 11:44:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:32.376 11:44:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:32.376 11:44:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:32.376 11:44:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62930 00:15:32.376 11:44:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:32.941 11:44:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:32.941 11:44:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:32.941 11:44:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62930 00:15:32.941 11:44:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:33.199 11:44:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:33.199 11:44:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:33.199 11:44:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62930 00:15:33.199 11:44:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:33.765 11:44:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:33.765 11:44:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:33.765 11:44:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62930 00:15:33.765 11:44:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62930 
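The countdown above, with kill -0 62930 repeated between sleep 0.5 calls, is json_config/common.sh's graceful-shutdown loop: SIGINT is sent once, and kill -0 is then used purely as an existence probe for up to 30 half-second intervals. With $pid holding the target's pid (62930 in this run), it reduces to roughly:

kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks existence
    sleep 0.5
done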
00:15:34.329 SPDK target shutdown done 00:15:34.329 Success 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:34.329 11:44:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:34.329 11:44:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:34.329 ************************************ 00:15:34.329 END TEST json_config_extra_key 00:15:34.329 ************************************ 00:15:34.329 00:15:34.329 real 0m3.793s 00:15:34.329 user 0m3.768s 00:15:34.329 sys 0m0.440s 00:15:34.329 11:44:31 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.329 11:44:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:34.329 11:44:31 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:34.329 11:44:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:34.329 11:44:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.329 11:44:31 -- common/autotest_common.sh@10 -- # set +x 00:15:34.329 ************************************ 00:15:34.329 START TEST alias_rpc 00:15:34.329 ************************************ 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:34.329 * Looking for test storage... 00:15:34.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:34.329 11:44:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:34.329 11:44:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63022 00:15:34.329 11:44:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:34.329 11:44:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63022 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 63022 ']' 00:15:34.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.329 11:44:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.586 [2024-07-25 11:44:31.468110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
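Before doing anything else, alias_rpc.sh@10 above installs an ERR trap, so a failure anywhere in the test tears the target down instead of leaking it into later tests. A stripped-down version of the idiom, with a plain kill standing in for the harness's killprocess helper:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
spdk_tgt_pid=$!
# From here on, any failing command kills the target and aborts the test.
trap 'kill "$spdk_tgt_pid"; exit 1' ERR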
00:15:34.586 [2024-07-25 11:44:31.468336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63022 ] 00:15:34.843 [2024-07-25 11:44:31.644407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.101 [2024-07-25 11:44:31.897626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.667 11:44:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.667 11:44:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:35.667 11:44:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:35.925 11:44:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63022 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 63022 ']' 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 63022 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63022 00:15:35.925 killing process with pid 63022 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63022' 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 63022 00:15:35.925 11:44:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 63022 00:15:38.466 ************************************ 00:15:38.466 END TEST alias_rpc 00:15:38.466 ************************************ 00:15:38.466 00:15:38.466 real 0m3.773s 00:15:38.466 user 0m4.048s 00:15:38.466 sys 0m0.454s 00:15:38.466 11:44:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.466 11:44:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.466 11:44:35 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:15:38.466 11:44:35 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:38.466 11:44:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:38.466 11:44:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.466 11:44:35 -- common/autotest_common.sh@10 -- # set +x 00:15:38.466 ************************************ 00:15:38.466 START TEST spdkcli_tcp 00:15:38.466 ************************************ 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:38.466 * Looking for test storage... 
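Two details above deserve a note. The rpc.py load_config -i call is the actual alias exercise: -i appears to be load_config's include-aliases switch (an assumption about current rpc.py flags), so a config written with deprecated method names still loads. And the killprocess sequence (uname, ps --no-headers -o comm=, the reactor_0-vs-sudo comparison, kill, wait) checks what it is about to signal before signalling it; simplified from the visible checks:

killprocess() {
    local pid=$1
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        [ "$name" != sudo ] && kill "$pid"        # signal directly unless wrapped by sudo
    fi
    wait "$pid"   # reap it and propagate the exit status
}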
00:15:38.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63121 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63121 00:15:38.466 11:44:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 63121 ']' 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.466 11:44:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.466 [2024-07-25 11:44:35.306204] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
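What follows is the point of spdkcli_tcp: the target only listens on a UNIX-domain socket, so the test bridges that socket to TCP with socat and then drives rpc.py against 127.0.0.1:9998. Reassembled from the commands logged below (-r retries, -t timeout, -s address, -p port):

SPDK=/home/vagrant/spdk_repo/spdk
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# The same RPC surface as over the UNIX socket, now reachable via TCP.
"$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods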
00:15:38.466 [2024-07-25 11:44:35.306473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63121 ] 00:15:38.466 [2024-07-25 11:44:35.486366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.724 [2024-07-25 11:44:35.675903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.724 [2024-07-25 11:44:35.675903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.657 11:44:36 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.657 11:44:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:15:39.657 11:44:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63138 00:15:39.657 11:44:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:15:39.657 11:44:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:15:39.657 [ 00:15:39.657 "bdev_malloc_delete", 00:15:39.657 "bdev_malloc_create", 00:15:39.657 "bdev_null_resize", 00:15:39.657 "bdev_null_delete", 00:15:39.657 "bdev_null_create", 00:15:39.657 "bdev_nvme_cuse_unregister", 00:15:39.657 "bdev_nvme_cuse_register", 00:15:39.657 "bdev_opal_new_user", 00:15:39.657 "bdev_opal_set_lock_state", 00:15:39.657 "bdev_opal_delete", 00:15:39.657 "bdev_opal_get_info", 00:15:39.657 "bdev_opal_create", 00:15:39.657 "bdev_nvme_opal_revert", 00:15:39.657 "bdev_nvme_opal_init", 00:15:39.657 "bdev_nvme_send_cmd", 00:15:39.657 "bdev_nvme_get_path_iostat", 00:15:39.657 "bdev_nvme_get_mdns_discovery_info", 00:15:39.657 "bdev_nvme_stop_mdns_discovery", 00:15:39.657 "bdev_nvme_start_mdns_discovery", 00:15:39.657 "bdev_nvme_set_multipath_policy", 00:15:39.657 "bdev_nvme_set_preferred_path", 00:15:39.657 "bdev_nvme_get_io_paths", 00:15:39.657 "bdev_nvme_remove_error_injection", 00:15:39.657 "bdev_nvme_add_error_injection", 00:15:39.657 "bdev_nvme_get_discovery_info", 00:15:39.657 "bdev_nvme_stop_discovery", 00:15:39.657 "bdev_nvme_start_discovery", 00:15:39.657 "bdev_nvme_get_controller_health_info", 00:15:39.657 "bdev_nvme_disable_controller", 00:15:39.657 "bdev_nvme_enable_controller", 00:15:39.657 "bdev_nvme_reset_controller", 00:15:39.657 "bdev_nvme_get_transport_statistics", 00:15:39.657 "bdev_nvme_apply_firmware", 00:15:39.657 "bdev_nvme_detach_controller", 00:15:39.657 "bdev_nvme_get_controllers", 00:15:39.657 "bdev_nvme_attach_controller", 00:15:39.657 "bdev_nvme_set_hotplug", 00:15:39.657 "bdev_nvme_set_options", 00:15:39.657 "bdev_passthru_delete", 00:15:39.657 "bdev_passthru_create", 00:15:39.657 "bdev_lvol_set_parent_bdev", 00:15:39.657 "bdev_lvol_set_parent", 00:15:39.657 "bdev_lvol_check_shallow_copy", 00:15:39.657 "bdev_lvol_start_shallow_copy", 00:15:39.657 "bdev_lvol_grow_lvstore", 00:15:39.657 "bdev_lvol_get_lvols", 00:15:39.657 "bdev_lvol_get_lvstores", 00:15:39.657 "bdev_lvol_delete", 00:15:39.657 "bdev_lvol_set_read_only", 00:15:39.657 "bdev_lvol_resize", 00:15:39.657 "bdev_lvol_decouple_parent", 00:15:39.657 "bdev_lvol_inflate", 00:15:39.657 "bdev_lvol_rename", 00:15:39.657 "bdev_lvol_clone_bdev", 00:15:39.657 "bdev_lvol_clone", 00:15:39.657 "bdev_lvol_snapshot", 00:15:39.657 "bdev_lvol_create", 00:15:39.657 "bdev_lvol_delete_lvstore", 00:15:39.657 "bdev_lvol_rename_lvstore", 00:15:39.657 "bdev_lvol_create_lvstore", 
00:15:39.657 "bdev_raid_set_options", 00:15:39.657 "bdev_raid_remove_base_bdev", 00:15:39.657 "bdev_raid_add_base_bdev", 00:15:39.657 "bdev_raid_delete", 00:15:39.657 "bdev_raid_create", 00:15:39.657 "bdev_raid_get_bdevs", 00:15:39.657 "bdev_error_inject_error", 00:15:39.657 "bdev_error_delete", 00:15:39.657 "bdev_error_create", 00:15:39.657 "bdev_split_delete", 00:15:39.657 "bdev_split_create", 00:15:39.657 "bdev_delay_delete", 00:15:39.657 "bdev_delay_create", 00:15:39.657 "bdev_delay_update_latency", 00:15:39.658 "bdev_zone_block_delete", 00:15:39.658 "bdev_zone_block_create", 00:15:39.658 "blobfs_create", 00:15:39.658 "blobfs_detect", 00:15:39.658 "blobfs_set_cache_size", 00:15:39.658 "bdev_xnvme_delete", 00:15:39.658 "bdev_xnvme_create", 00:15:39.658 "bdev_aio_delete", 00:15:39.658 "bdev_aio_rescan", 00:15:39.658 "bdev_aio_create", 00:15:39.658 "bdev_ftl_set_property", 00:15:39.658 "bdev_ftl_get_properties", 00:15:39.658 "bdev_ftl_get_stats", 00:15:39.658 "bdev_ftl_unmap", 00:15:39.658 "bdev_ftl_unload", 00:15:39.658 "bdev_ftl_delete", 00:15:39.658 "bdev_ftl_load", 00:15:39.658 "bdev_ftl_create", 00:15:39.658 "bdev_virtio_attach_controller", 00:15:39.658 "bdev_virtio_scsi_get_devices", 00:15:39.658 "bdev_virtio_detach_controller", 00:15:39.658 "bdev_virtio_blk_set_hotplug", 00:15:39.658 "bdev_iscsi_delete", 00:15:39.658 "bdev_iscsi_create", 00:15:39.658 "bdev_iscsi_set_options", 00:15:39.658 "accel_error_inject_error", 00:15:39.658 "ioat_scan_accel_module", 00:15:39.658 "dsa_scan_accel_module", 00:15:39.658 "iaa_scan_accel_module", 00:15:39.658 "keyring_file_remove_key", 00:15:39.658 "keyring_file_add_key", 00:15:39.658 "keyring_linux_set_options", 00:15:39.658 "iscsi_get_histogram", 00:15:39.658 "iscsi_enable_histogram", 00:15:39.658 "iscsi_set_options", 00:15:39.658 "iscsi_get_auth_groups", 00:15:39.658 "iscsi_auth_group_remove_secret", 00:15:39.658 "iscsi_auth_group_add_secret", 00:15:39.658 "iscsi_delete_auth_group", 00:15:39.658 "iscsi_create_auth_group", 00:15:39.658 "iscsi_set_discovery_auth", 00:15:39.658 "iscsi_get_options", 00:15:39.658 "iscsi_target_node_request_logout", 00:15:39.658 "iscsi_target_node_set_redirect", 00:15:39.658 "iscsi_target_node_set_auth", 00:15:39.658 "iscsi_target_node_add_lun", 00:15:39.658 "iscsi_get_stats", 00:15:39.658 "iscsi_get_connections", 00:15:39.658 "iscsi_portal_group_set_auth", 00:15:39.658 "iscsi_start_portal_group", 00:15:39.658 "iscsi_delete_portal_group", 00:15:39.658 "iscsi_create_portal_group", 00:15:39.658 "iscsi_get_portal_groups", 00:15:39.658 "iscsi_delete_target_node", 00:15:39.658 "iscsi_target_node_remove_pg_ig_maps", 00:15:39.658 "iscsi_target_node_add_pg_ig_maps", 00:15:39.658 "iscsi_create_target_node", 00:15:39.658 "iscsi_get_target_nodes", 00:15:39.658 "iscsi_delete_initiator_group", 00:15:39.658 "iscsi_initiator_group_remove_initiators", 00:15:39.658 "iscsi_initiator_group_add_initiators", 00:15:39.658 "iscsi_create_initiator_group", 00:15:39.658 "iscsi_get_initiator_groups", 00:15:39.658 "nvmf_set_crdt", 00:15:39.658 "nvmf_set_config", 00:15:39.658 "nvmf_set_max_subsystems", 00:15:39.658 "nvmf_stop_mdns_prr", 00:15:39.658 "nvmf_publish_mdns_prr", 00:15:39.658 "nvmf_subsystem_get_listeners", 00:15:39.658 "nvmf_subsystem_get_qpairs", 00:15:39.658 "nvmf_subsystem_get_controllers", 00:15:39.658 "nvmf_get_stats", 00:15:39.658 "nvmf_get_transports", 00:15:39.658 "nvmf_create_transport", 00:15:39.658 "nvmf_get_targets", 00:15:39.658 "nvmf_delete_target", 00:15:39.658 "nvmf_create_target", 00:15:39.658 
"nvmf_subsystem_allow_any_host", 00:15:39.658 "nvmf_subsystem_remove_host", 00:15:39.658 "nvmf_subsystem_add_host", 00:15:39.658 "nvmf_ns_remove_host", 00:15:39.658 "nvmf_ns_add_host", 00:15:39.658 "nvmf_subsystem_remove_ns", 00:15:39.658 "nvmf_subsystem_add_ns", 00:15:39.658 "nvmf_subsystem_listener_set_ana_state", 00:15:39.658 "nvmf_discovery_get_referrals", 00:15:39.658 "nvmf_discovery_remove_referral", 00:15:39.658 "nvmf_discovery_add_referral", 00:15:39.658 "nvmf_subsystem_remove_listener", 00:15:39.658 "nvmf_subsystem_add_listener", 00:15:39.658 "nvmf_delete_subsystem", 00:15:39.658 "nvmf_create_subsystem", 00:15:39.658 "nvmf_get_subsystems", 00:15:39.658 "env_dpdk_get_mem_stats", 00:15:39.658 "nbd_get_disks", 00:15:39.658 "nbd_stop_disk", 00:15:39.658 "nbd_start_disk", 00:15:39.658 "ublk_recover_disk", 00:15:39.658 "ublk_get_disks", 00:15:39.658 "ublk_stop_disk", 00:15:39.658 "ublk_start_disk", 00:15:39.658 "ublk_destroy_target", 00:15:39.658 "ublk_create_target", 00:15:39.658 "virtio_blk_create_transport", 00:15:39.658 "virtio_blk_get_transports", 00:15:39.658 "vhost_controller_set_coalescing", 00:15:39.658 "vhost_get_controllers", 00:15:39.658 "vhost_delete_controller", 00:15:39.658 "vhost_create_blk_controller", 00:15:39.658 "vhost_scsi_controller_remove_target", 00:15:39.658 "vhost_scsi_controller_add_target", 00:15:39.658 "vhost_start_scsi_controller", 00:15:39.658 "vhost_create_scsi_controller", 00:15:39.658 "thread_set_cpumask", 00:15:39.658 "framework_get_governor", 00:15:39.658 "framework_get_scheduler", 00:15:39.658 "framework_set_scheduler", 00:15:39.658 "framework_get_reactors", 00:15:39.658 "thread_get_io_channels", 00:15:39.658 "thread_get_pollers", 00:15:39.658 "thread_get_stats", 00:15:39.658 "framework_monitor_context_switch", 00:15:39.658 "spdk_kill_instance", 00:15:39.658 "log_enable_timestamps", 00:15:39.658 "log_get_flags", 00:15:39.658 "log_clear_flag", 00:15:39.658 "log_set_flag", 00:15:39.658 "log_get_level", 00:15:39.658 "log_set_level", 00:15:39.658 "log_get_print_level", 00:15:39.658 "log_set_print_level", 00:15:39.658 "framework_enable_cpumask_locks", 00:15:39.658 "framework_disable_cpumask_locks", 00:15:39.658 "framework_wait_init", 00:15:39.658 "framework_start_init", 00:15:39.658 "scsi_get_devices", 00:15:39.658 "bdev_get_histogram", 00:15:39.658 "bdev_enable_histogram", 00:15:39.658 "bdev_set_qos_limit", 00:15:39.658 "bdev_set_qd_sampling_period", 00:15:39.658 "bdev_get_bdevs", 00:15:39.658 "bdev_reset_iostat", 00:15:39.658 "bdev_get_iostat", 00:15:39.658 "bdev_examine", 00:15:39.658 "bdev_wait_for_examine", 00:15:39.658 "bdev_set_options", 00:15:39.658 "notify_get_notifications", 00:15:39.658 "notify_get_types", 00:15:39.658 "accel_get_stats", 00:15:39.658 "accel_set_options", 00:15:39.658 "accel_set_driver", 00:15:39.658 "accel_crypto_key_destroy", 00:15:39.658 "accel_crypto_keys_get", 00:15:39.658 "accel_crypto_key_create", 00:15:39.658 "accel_assign_opc", 00:15:39.658 "accel_get_module_info", 00:15:39.658 "accel_get_opc_assignments", 00:15:39.658 "vmd_rescan", 00:15:39.658 "vmd_remove_device", 00:15:39.658 "vmd_enable", 00:15:39.658 "sock_get_default_impl", 00:15:39.658 "sock_set_default_impl", 00:15:39.658 "sock_impl_set_options", 00:15:39.658 "sock_impl_get_options", 00:15:39.658 "iobuf_get_stats", 00:15:39.658 "iobuf_set_options", 00:15:39.658 "framework_get_pci_devices", 00:15:39.658 "framework_get_config", 00:15:39.658 "framework_get_subsystems", 00:15:39.658 "trace_get_info", 00:15:39.658 "trace_get_tpoint_group_mask", 00:15:39.658 
"trace_disable_tpoint_group", 00:15:39.658 "trace_enable_tpoint_group", 00:15:39.658 "trace_clear_tpoint_mask", 00:15:39.658 "trace_set_tpoint_mask", 00:15:39.658 "keyring_get_keys", 00:15:39.658 "spdk_get_version", 00:15:39.658 "rpc_get_methods" 00:15:39.658 ] 00:15:39.916 11:44:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.916 11:44:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:39.916 11:44:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63121 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 63121 ']' 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 63121 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63121 00:15:39.916 killing process with pid 63121 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63121' 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 63121 00:15:39.916 11:44:36 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 63121 00:15:41.853 ************************************ 00:15:41.853 END TEST spdkcli_tcp 00:15:41.853 ************************************ 00:15:41.853 00:15:41.853 real 0m3.775s 00:15:41.853 user 0m6.782s 00:15:41.853 sys 0m0.509s 00:15:41.853 11:44:38 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.853 11:44:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:42.112 11:44:38 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:42.112 11:44:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:42.112 11:44:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.112 11:44:38 -- common/autotest_common.sh@10 -- # set +x 00:15:42.112 ************************************ 00:15:42.112 START TEST dpdk_mem_utility 00:15:42.112 ************************************ 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:42.112 * Looking for test storage... 
00:15:42.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:42.112 11:44:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:42.112 11:44:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63229 00:15:42.112 11:44:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63229 00:15:42.112 11:44:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63229 ']' 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.112 11:44:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:42.112 [2024-07-25 11:44:39.099561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:42.112 [2024-07-25 11:44:39.100003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:15:42.370 [2024-07-25 11:44:39.276198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.628 [2024-07-25 11:44:39.515395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.564 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.565 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:15:43.565 11:44:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:43.565 11:44:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:43.565 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.565 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:43.565 { 00:15:43.565 "filename": "/tmp/spdk_mem_dump.txt" 00:15:43.565 } 00:15:43.565 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.565 11:44:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:43.565 DPDK memory size 820.000000 MiB in 1 heap(s) 00:15:43.565 1 heaps totaling size 820.000000 MiB 00:15:43.565 size: 820.000000 MiB heap id: 0 00:15:43.565 end heaps---------- 00:15:43.565 8 mempools totaling size 598.116089 MiB 00:15:43.565 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:43.565 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:43.565 size: 84.521057 MiB name: bdev_io_63229 00:15:43.565 size: 51.011292 MiB name: evtpool_63229 00:15:43.565 size: 50.003479 MiB name: msgpool_63229 00:15:43.565 size: 21.763794 MiB name: PDU_Pool 00:15:43.565 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:15:43.565 size: 0.026123 MiB name: Session_Pool 00:15:43.565 end mempools------- 00:15:43.565 6 memzones totaling size 4.142822 MiB 00:15:43.565 size: 1.000366 MiB name: RG_ring_0_63229 00:15:43.565 size: 1.000366 MiB name: RG_ring_1_63229 00:15:43.565 size: 1.000366 MiB name: RG_ring_4_63229 00:15:43.565 size: 1.000366 MiB name: RG_ring_5_63229 00:15:43.565 size: 0.125366 MiB name: RG_ring_2_63229 00:15:43.565 size: 0.015991 MiB name: RG_ring_3_63229 00:15:43.565 end memzones------- 00:15:43.565 11:44:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:43.565 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:15:43.565 list of free elements. size: 18.451538 MiB 00:15:43.565 element at address: 0x200000400000 with size: 1.999451 MiB 00:15:43.565 element at address: 0x200000800000 with size: 1.996887 MiB 00:15:43.565 element at address: 0x200007000000 with size: 1.995972 MiB 00:15:43.565 element at address: 0x20000b200000 with size: 1.995972 MiB 00:15:43.565 element at address: 0x200019100040 with size: 0.999939 MiB 00:15:43.565 element at address: 0x200019500040 with size: 0.999939 MiB 00:15:43.565 element at address: 0x200019600000 with size: 0.999084 MiB 00:15:43.565 element at address: 0x200003e00000 with size: 0.996094 MiB 00:15:43.565 element at address: 0x200032200000 with size: 0.994324 MiB 00:15:43.565 element at address: 0x200018e00000 with size: 0.959656 MiB 00:15:43.565 element at address: 0x200019900040 with size: 0.936401 MiB 00:15:43.565 element at address: 0x200000200000 with size: 0.829956 MiB 00:15:43.565 element at address: 0x20001b000000 with size: 0.564148 MiB 00:15:43.565 element at address: 0x200019200000 with size: 0.487976 MiB 00:15:43.565 element at address: 0x200019a00000 with size: 0.485413 MiB 00:15:43.565 element at address: 0x200013800000 with size: 0.467896 MiB 00:15:43.565 element at address: 0x200028400000 with size: 0.390442 MiB 00:15:43.565 element at address: 0x200003a00000 with size: 0.351990 MiB 00:15:43.565 list of standard malloc elements. 
size: 199.284058 MiB 00:15:43.565 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:15:43.565 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:15:43.565 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:15:43.565 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:15:43.565 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:15:43.565 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:15:43.565 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:15:43.565 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:15:43.565 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:15:43.565 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:15:43.565 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:15:43.565 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:15:43.565 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003aff980 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003affa80 with size: 0.000244 MiB 00:15:43.565 element at address: 0x200003eff000 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:15:43.565 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:15:43.566 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013877c80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013877d80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013877e80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013877f80 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013878080 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013878180 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013878280 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013878380 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013878480 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200013878580 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x200019abc680 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:15:43.566 element at address: 0x20001b090ec0 
with size: 0.000244 MiB
00:15:43.566 [~150 identical entries elided: every element from 0x20001b090fc0 through 0x20001b0953c0 and from 0x200028463f40 through 0x20002846fe80 reads 'element at address: 0x... with size: 0.000244 MiB']
00:15:43.567 list of memzone associated elements. size: 602.264404 MiB
00:15:43.567 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:15:43.567 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:15:43.567 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:15:43.567 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:15:43.567 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:15:43.567 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63229_0
00:15:43.567 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:15:43.567 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63229_0
00:15:43.567 element at address: 0x200003fff340 with size: 48.003113 MiB
00:15:43.567 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63229_0
00:15:43.567 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:15:43.567 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:15:43.567 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:15:43.567 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:15:43.567 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:15:43.567 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63229
00:15:43.567 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:15:43.567 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63229
00:15:43.567 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:15:43.567 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63229
00:15:43.567 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:15:43.567 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:15:43.567 element at address: 0x200019abc780 with size: 1.008179 MiB
00:15:43.567 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:15:43.567 element at address: 0x200018efde00 with size: 1.008179 MiB
00:15:43.567 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:15:43.567 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:15:43.567 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:15:43.567 element at address: 0x200003eff100 with size: 1.000549 MiB
00:15:43.567 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63229
00:15:43.567 element at address: 0x200003affb80 with size: 1.000549 MiB
00:15:43.567 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63229
00:15:43.567 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:15:43.567 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63229
00:15:43.567 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:15:43.567 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63229
00:15:43.567 element at address: 0x200003a5b2c0 with size: 0.500549 MiB
00:15:43.567 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63229
00:15:43.567 element at address: 0x20001927dac0 with size: 0.500549 MiB
00:15:43.567 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:15:43.567 element at address: 0x200013878680 with size: 0.500549 MiB
00:15:43.567 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:15:43.567 element at address: 0x200019a7c440 with size: 0.250549 MiB
00:15:43.567 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:15:43.567 element at address: 0x200003adf740 with size: 0.125549 MiB
00:15:43.567 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63229
00:15:43.567 element at address: 0x200018ef5ac0 with size: 0.031799 MiB
00:15:43.567 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:15:43.567 element at address: 0x200028464140 with size: 0.023804 MiB
00:15:43.567 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:15:43.567 element at address: 0x200003adb500 with size: 0.016174 MiB
00:15:43.567 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63229
00:15:43.567 element at address: 0x20002846a2c0 with size: 0.002502 MiB
00:15:43.567 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:15:43.567 element at address: 0x2000002d5f80 with size: 0.000366 MiB
00:15:43.567 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63229
00:15:43.567 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:15:43.567 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63229
00:15:43.567 element at address: 0x20002846ae00 with size: 0.000366 MiB
00:15:43.567 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:15:43.567 11:44:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:15:43.567 11:44:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63229
00:15:43.567 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63229 ']'
00:15:43.567 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63229
00:15:43.567 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:15:43.567 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:43.567 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63229
00:15:43.567 killing process with pid 63229
11:44:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:43.568 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:43.568 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63229'
00:15:43.568 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63229
00:15:43.568 11:44:40 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63229
00:15:46.101
00:15:46.101 real 0m3.636s
00:15:46.101 user 0m3.792s
00:15:46.101 sys 0m0.435s
00:15:46.101 ************************************
00:15:46.101 END TEST dpdk_mem_utility
************************************
00:15:46.101 11:44:42 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:46.101 11:44:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:15:46.101 11:44:42 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:15:46.101 11:44:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:46.101 11:44:42 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:46.101 11:44:42 -- common/autotest_common.sh@10 -- # set +x
00:15:46.101 ************************************
00:15:46.101 START TEST event
************************************ 00:15:46.101 11:44:42 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:46.101 * Looking for test storage... 00:15:46.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:46.101 11:44:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:46.101 11:44:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:15:46.101 11:44:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:46.101 11:44:42 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:15:46.101 11:44:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.101 11:44:42 event -- common/autotest_common.sh@10 -- # set +x 00:15:46.101 ************************************ 00:15:46.101 START TEST event_perf 00:15:46.101 ************************************ 00:15:46.101 11:44:42 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:46.101 Running I/O for 1 seconds...[2024-07-25 11:44:42.706273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:46.101 [2024-07-25 11:44:42.706632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63324 ] 00:15:46.101 [2024-07-25 11:44:42.887196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.101 [2024-07-25 11:44:43.089495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.101 [2024-07-25 11:44:43.089560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.101 [2024-07-25 11:44:43.089687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.101 [2024-07-25 11:44:43.089725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.476 Running I/O for 1 seconds... 00:15:47.476 lcore 0: 184640 00:15:47.476 lcore 1: 184636 00:15:47.476 lcore 2: 184638 00:15:47.476 lcore 3: 184639 00:15:47.476 done. 00:15:47.476 00:15:47.476 real 0m1.830s 00:15:47.476 user 0m4.578s 00:15:47.476 sys 0m0.122s 00:15:47.476 ************************************ 00:15:47.476 END TEST event_perf 00:15:47.476 ************************************ 00:15:47.476 11:44:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.476 11:44:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:47.734 11:44:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:47.734 11:44:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:47.734 11:44:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.734 11:44:44 event -- common/autotest_common.sh@10 -- # set +x 00:15:47.734 ************************************ 00:15:47.734 START TEST event_reactor 00:15:47.734 ************************************ 00:15:47.734 11:44:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:47.734 [2024-07-25 11:44:44.580524] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
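The event_perf run above launches one SPDK reactor per bit set in the -m 0xF core mask (four reactors on cores 0-3) and counts how many events each lcore processes during the -t 1 second run; the near-identical per-lcore totals (~184k each) are what an evenly loaded event loop should produce. A minimal way to rerun the same microbenchmark by hand, using the binary path and flags from the trace (the two-core mask and five-second runtime below are illustrative, not the values used in this job):

  # -m is a hex bitmask of cores to run reactors on, -t is the runtime in seconds
  cd /home/vagrant/spdk_repo/spdk
  ./test/event/event_perf/event_perf -m 0x3 -t 5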
00:15:47.734 [2024-07-25 11:44:44.580885] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63371 ] 00:15:47.734 [2024-07-25 11:44:44.744113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.993 [2024-07-25 11:44:44.932133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.368 test_start 00:15:49.368 oneshot 00:15:49.368 tick 100 00:15:49.368 tick 100 00:15:49.368 tick 250 00:15:49.368 tick 100 00:15:49.368 tick 100 00:15:49.368 tick 250 00:15:49.368 tick 100 00:15:49.368 tick 500 00:15:49.368 tick 100 00:15:49.368 tick 100 00:15:49.368 tick 250 00:15:49.368 tick 100 00:15:49.368 tick 100 00:15:49.368 test_end 00:15:49.368 ************************************ 00:15:49.368 END TEST event_reactor 00:15:49.368 ************************************ 00:15:49.368 00:15:49.368 real 0m1.793s 00:15:49.368 user 0m1.597s 00:15:49.368 sys 0m0.084s 00:15:49.368 11:44:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.368 11:44:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:15:49.368 11:44:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:49.368 11:44:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:49.368 11:44:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.368 11:44:46 event -- common/autotest_common.sh@10 -- # set +x 00:15:49.368 ************************************ 00:15:49.368 START TEST event_reactor_perf 00:15:49.368 ************************************ 00:15:49.368 11:44:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:49.626 [2024-07-25 11:44:46.409322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
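The event_reactor trace above exercises the poller machinery on a single reactor: a one-shot poller plus periodic pollers firing at 100, 250 and 500 tick intervals, which is why the tick markers between test_start and test_end repeat in roughly that ratio. A rough sanity check of the same run, assuming only the marker format printed above:

  # count how often each periodic poller fired during a 1-second run
  cd /home/vagrant/spdk_repo/spdk
  ./test/event/reactor/reactor -t 1 > /tmp/reactor.log
  for iv in 100 250 500; do
    printf 'tick %s fired %s times\n' "$iv" "$(grep -c "tick $iv" /tmp/reactor.log)"
  done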
00:15:49.626 [2024-07-25 11:44:46.409470] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63413 ] 00:15:49.626 [2024-07-25 11:44:46.572636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.884 [2024-07-25 11:44:46.760545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.262 test_start 00:15:51.262 test_end 00:15:51.262 Performance: 274234 events per second 00:15:51.262 00:15:51.262 real 0m1.770s 00:15:51.262 user 0m1.584s 00:15:51.262 sys 0m0.075s 00:15:51.262 11:44:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.262 ************************************ 00:15:51.262 END TEST event_reactor_perf 00:15:51.262 ************************************ 00:15:51.262 11:44:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:15:51.262 11:44:48 event -- event/event.sh@49 -- # uname -s 00:15:51.262 11:44:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:51.262 11:44:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:51.262 11:44:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:51.262 11:44:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.262 11:44:48 event -- common/autotest_common.sh@10 -- # set +x 00:15:51.262 ************************************ 00:15:51.262 START TEST event_scheduler 00:15:51.262 ************************************ 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:51.262 * Looking for test storage... 00:15:51.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:51.262 11:44:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:51.262 11:44:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63475 00:15:51.262 11:44:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:51.262 11:44:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:51.262 11:44:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63475 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63475 ']' 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.262 11:44:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:51.522 [2024-07-25 11:44:48.378897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
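reactor_perf drives a single reactor (-c 0x1 in the EAL line above) as fast as it can and reports the sustained event rate, 274234 events per second in this run. Longer runtimes smooth out scheduling noise, so a small sweep is more informative than one sample; a sketch using the same binary and flag as the trace:

  # sample the single-reactor event rate at a few runtimes
  cd /home/vagrant/spdk_repo/spdk
  for t in 1 5 10; do
    ./test/event/reactor_perf/reactor_perf -t "$t" | grep 'Performance:'
  done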
00:15:51.522 [2024-07-25 11:44:48.379108] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63475 ] 00:15:51.780 [2024-07-25 11:44:48.558768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.780 [2024-07-25 11:44:48.754857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.780 [2024-07-25 11:44:48.754947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.780 [2024-07-25 11:44:48.755027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.780 [2024-07-25 11:44:48.755259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:15:52.348 11:44:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:52.348 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:52.348 POWER: Cannot set governor of lcore 0 to userspace 00:15:52.348 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:52.348 POWER: Cannot set governor of lcore 0 to performance 00:15:52.348 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:52.348 POWER: Cannot set governor of lcore 0 to userspace 00:15:52.348 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:52.348 POWER: Cannot set governor of lcore 0 to userspace 00:15:52.348 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:15:52.348 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:52.348 POWER: Unable to set Power Management Environment for lcore 0 00:15:52.348 [2024-07-25 11:44:49.333369] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:15:52.348 [2024-07-25 11:44:49.333392] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:15:52.348 [2024-07-25 11:44:49.333408] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:15:52.348 [2024-07-25 11:44:49.333432] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:15:52.348 [2024-07-25 11:44:49.333447] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:15:52.348 [2024-07-25 11:44:49.333458] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.348 11:44:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.348 11:44:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:52.607 [2024-07-25 11:44:49.611996] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
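The POWER errors above are expected in this environment: the dynamic scheduler tries to take control of the host's cpufreq scaling governors, and inside a VM with no /sys/devices/system/cpu/cpu*/cpufreq entries and no virtio power-agent channel that fails, so it falls back to running without a governor and simply applies its thresholds (load limit 20, core limit 80, core busy 95). Because the scheduler app was started with --wait-for-rpc, the scheduler is selected over RPC before framework initialization finishes; a minimal sketch of that handshake (framework_set_scheduler and framework_start_init appear in the trace, while the framework_get_scheduler verification step is an assumption):

  # select the dynamic scheduler, then let framework init proceed
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  $rpc framework_get_scheduler   # assumed RPC name; reports the active scheduler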
00:15:52.607 11:44:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.607 11:44:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:52.607 11:44:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:52.607 11:44:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.607 11:44:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:52.607 ************************************ 00:15:52.607 START TEST scheduler_create_thread 00:15:52.607 ************************************ 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.607 2 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.607 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 3 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 4 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 5 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 6 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 7 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 8 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 9 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 10 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 ************************************ 00:15:52.866 END TEST scheduler_create_thread 00:15:52.866 ************************************ 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.866 00:15:52.866 real 0m0.110s 00:15:52.866 user 0m0.010s 00:15:52.866 sys 0m0.007s 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.866 11:44:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:52.867 11:44:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:52.867 11:44:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63475 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63475 ']' 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63475 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63475 00:15:52.867 killing process with pid 63475 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63475' 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63475 00:15:52.867 11:44:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63475 00:15:53.434 [2024-07-25 11:44:50.218300] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
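The scheduler_create_thread subtest above drives a test-only RPC plugin: scheduler_thread_create registers a thread with a name (-n), a cpumask (-m) and an active load percentage (-a) and returns its id, scheduler_thread_set_active changes that load at runtime, and scheduler_thread_delete tears the thread down again (thread ids 11 and 12 in the trace). A condensed sketch of the same sequence, assuming the scheduler_plugin module is importable as the test arranges via PYTHONPATH:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n demo -m 0x1 -a 75)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"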
00:15:54.369 ************************************ 00:15:54.369 END TEST event_scheduler 00:15:54.369 ************************************ 00:15:54.369 00:15:54.369 real 0m3.172s 00:15:54.369 user 0m5.063s 00:15:54.369 sys 0m0.402s 00:15:54.369 11:44:51 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.369 11:44:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:54.627 11:44:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:15:54.627 11:44:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:54.627 11:44:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:54.627 11:44:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.627 11:44:51 event -- common/autotest_common.sh@10 -- # set +x 00:15:54.627 ************************************ 00:15:54.627 START TEST app_repeat 00:15:54.627 ************************************ 00:15:54.627 11:44:51 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:15:54.627 Process app_repeat pid: 63565 00:15:54.627 spdk_app_start Round 0 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63565 00:15:54.627 11:44:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:54.628 11:44:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63565' 00:15:54.628 11:44:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:54.628 11:44:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:54.628 11:44:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63565 /var/tmp/spdk-nbd.sock 00:15:54.628 11:44:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:54.628 11:44:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63565 ']' 00:15:54.628 11:44:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:54.628 11:44:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:54.628 11:44:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:54.628 11:44:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.628 11:44:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:54.628 [2024-07-25 11:44:51.486831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
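app_repeat starts a single SPDK app pointed at /var/tmp/spdk-nbd.sock (-r) and then loops over three rounds (for i in {0..2} above), re-running the same NBD verification after each spdk_app_start round. Each round gates on the waitforlisten helper, which polls until the app's RPC socket is usable; a simplified stand-in for that wait, assuming only the socket path shown in the trace (the real helper also probes the socket with an RPC):

  # crude waitforlisten: poll for the RPC unix socket before using it
  sock=/var/tmp/spdk-nbd.sock
  for _ in $(seq 1 100); do
    [ -S "$sock" ] && break
    sleep 0.1
  done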
00:15:54.628 [2024-07-25 11:44:51.487004] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:15:54.887 [2024-07-25 11:44:51.662400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.887 [2024-07-25 11:44:51.895829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.887 [2024-07-25 11:44:51.895831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.854 11:44:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.854 11:44:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:15:55.854 11:44:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:56.114 Malloc0 00:15:56.114 11:44:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:56.373 Malloc1 00:15:56.373 11:44:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.373 11:44:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:56.938 /dev/nbd0 00:15:56.938 11:44:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.938 11:44:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:56.938 11:44:53 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:56.938 1+0 records in 00:15:56.938 1+0 records out 00:15:56.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359056 s, 11.4 MB/s 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:56.938 11:44:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:15:56.938 11:44:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.938 11:44:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.939 11:44:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:57.197 /dev/nbd1 00:15:57.197 11:44:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:57.197 11:44:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:57.197 1+0 records in 00:15:57.197 1+0 records out 00:15:57.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443702 s, 9.2 MB/s 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:57.197 11:44:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:15:57.197 11:44:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.197 11:44:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:57.197 11:44:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:57.197 11:44:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.197 
11:44:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:57.455 { 00:15:57.455 "nbd_device": "/dev/nbd0", 00:15:57.455 "bdev_name": "Malloc0" 00:15:57.455 }, 00:15:57.455 { 00:15:57.455 "nbd_device": "/dev/nbd1", 00:15:57.455 "bdev_name": "Malloc1" 00:15:57.455 } 00:15:57.455 ]' 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:57.455 { 00:15:57.455 "nbd_device": "/dev/nbd0", 00:15:57.455 "bdev_name": "Malloc0" 00:15:57.455 }, 00:15:57.455 { 00:15:57.455 "nbd_device": "/dev/nbd1", 00:15:57.455 "bdev_name": "Malloc1" 00:15:57.455 } 00:15:57.455 ]' 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:57.455 /dev/nbd1' 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:57.455 /dev/nbd1' 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:57.455 11:44:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:57.456 256+0 records in 00:15:57.456 256+0 records out 00:15:57.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750647 s, 140 MB/s 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:57.456 11:44:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:57.714 256+0 records in 00:15:57.714 256+0 records out 00:15:57.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311709 s, 33.6 MB/s 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:57.714 256+0 records in 00:15:57.714 256+0 records out 00:15:57.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0608353 s, 17.2 MB/s 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:57.714 11:44:54 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.714 11:44:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.972 11:44:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:58.539 11:44:55 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.539 11:44:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:58.798 11:44:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:58.798 11:44:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:59.369 11:44:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:00.745 [2024-07-25 11:44:57.391276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:00.745 [2024-07-25 11:44:57.573488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.745 [2024-07-25 11:44:57.573509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.745 [2024-07-25 11:44:57.742229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:00.745 [2024-07-25 11:44:57.742322] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:02.702 spdk_app_start Round 1 00:16:02.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:02.702 11:44:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:02.702 11:44:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:02.702 11:44:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63565 /var/tmp/spdk-nbd.sock 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63565 ']' 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
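Round 0 above, and Round 1 starting here, follow the same NBD round-trip: create two malloc bdevs (bdev_malloc_create 64 4096 gives a 64 MB bdev with a 4096-byte block size), export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device with O_DIRECT dd, compare it back against the source file with cmp, then detach the devices. The essential commands for one device, condensed from the trace (the /tmp scratch path is illustrative; the test uses its own nbdrandtest file under the repo):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $rpc -s $sock bdev_malloc_create 64 4096            # -> Malloc0
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0             # verify the readback
  $rpc -s $sock nbd_stop_disk /dev/nbd0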
00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.702 11:44:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:02.702 11:44:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:02.960 Malloc0 00:16:02.960 11:44:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:03.217 Malloc1 00:16:03.217 11:45:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.217 11:45:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:03.474 /dev/nbd0 00:16:03.474 11:45:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.474 11:45:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.474 11:45:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.475 11:45:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:03.475 1+0 records in 00:16:03.475 1+0 records out 
00:16:03.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490089 s, 8.4 MB/s 00:16:03.475 11:45:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:03.475 11:45:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:03.475 11:45:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:03.475 11:45:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.475 11:45:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:03.475 11:45:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.475 11:45:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.475 11:45:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:04.041 /dev/nbd1 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:04.041 1+0 records in 00:16:04.041 1+0 records out 00:16:04.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297103 s, 13.8 MB/s 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:04.041 11:45:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.041 11:45:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:04.299 { 00:16:04.299 "nbd_device": "/dev/nbd0", 00:16:04.299 "bdev_name": "Malloc0" 00:16:04.299 }, 00:16:04.299 { 00:16:04.299 "nbd_device": "/dev/nbd1", 00:16:04.299 "bdev_name": "Malloc1" 00:16:04.299 } 
00:16:04.299 ]' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:04.299 { 00:16:04.299 "nbd_device": "/dev/nbd0", 00:16:04.299 "bdev_name": "Malloc0" 00:16:04.299 }, 00:16:04.299 { 00:16:04.299 "nbd_device": "/dev/nbd1", 00:16:04.299 "bdev_name": "Malloc1" 00:16:04.299 } 00:16:04.299 ]' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:04.299 /dev/nbd1' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:04.299 /dev/nbd1' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:04.299 256+0 records in 00:16:04.299 256+0 records out 00:16:04.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532445 s, 197 MB/s 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:04.299 256+0 records in 00:16:04.299 256+0 records out 00:16:04.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365116 s, 28.7 MB/s 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.299 11:45:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:04.558 256+0 records in 00:16:04.558 256+0 records out 00:16:04.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362203 s, 28.9 MB/s 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:04.558 11:45:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.558 11:45:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.817 11:45:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.074 11:45:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:05.332 11:45:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:05.332 11:45:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:05.897 11:45:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:07.285 [2024-07-25 11:45:03.971822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.285 [2024-07-25 11:45:04.154917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.285 [2024-07-25 11:45:04.154923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.543 [2024-07-25 11:45:04.320996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:07.543 [2024-07-25 11:45:04.321107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:08.918 11:45:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:08.918 spdk_app_start Round 2 00:16:08.918 11:45:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:08.918 11:45:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63565 /var/tmp/spdk-nbd.sock 00:16:08.918 11:45:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63565 ']' 00:16:08.918 11:45:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:08.918 11:45:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:08.919 11:45:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
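Each time a bdev is exported above, the waitfornbd trace (common/autotest_common.sh@868-@889, seen for nbd0 and nbd1) polls /proc/partitions until the kernel publishes the device, then reads one 4 KiB block with O_DIRECT and checks that non-empty data landed in the scratch file. A sketch of that probe, reconstructed from the traced commands; the retry delay is hidden by the trace and is assumed, and $testdir stands in for the traced /home/vagrant/spdk_repo/spdk/test/event path.

    # Sketch of waitfornbd from the common/autotest_common.sh@868-@889 trace;
    # retry pacing is assumed, the commands mirror the trace.
    waitfornbd() {
        local nbd_name=$1
        local i
        # @871-@873: wait for the kernel to list the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed; the delay is not visible in the trace
        done
        # @884-@889: retry until one 4 KiB O_DIRECT read returns real data.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s "$testdir/nbdtest")
            rm -f "$testdir/nbdtest"
            if [ "$size" != 0 ]; then
                return 0
            fi
            sleep 0.1    # assumed
        done
        return 1
    }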
00:16:08.919 11:45:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.919 11:45:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:09.177 11:45:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.177 11:45:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:09.177 11:45:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:09.435 Malloc0 00:16:09.435 11:45:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:09.693 Malloc1 00:16:09.693 11:45:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:09.693 11:45:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.693 11:45:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:09.693 11:45:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:09.693 11:45:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.693 11:45:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.951 11:45:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:10.210 /dev/nbd0 00:16:10.210 11:45:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:10.210 11:45:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:10.210 1+0 records in 00:16:10.210 1+0 records out 
00:16:10.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032095 s, 12.8 MB/s 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:10.210 11:45:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:10.210 11:45:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.210 11:45:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:10.210 11:45:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:10.469 /dev/nbd1 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:10.469 1+0 records in 00:16:10.469 1+0 records out 00:16:10.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372619 s, 11.0 MB/s 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:10.469 11:45:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.469 11:45:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:10.727 11:45:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:10.727 { 00:16:10.727 "nbd_device": "/dev/nbd0", 00:16:10.727 "bdev_name": "Malloc0" 00:16:10.727 }, 00:16:10.727 { 00:16:10.727 "nbd_device": "/dev/nbd1", 00:16:10.727 "bdev_name": "Malloc1" 00:16:10.727 } 
00:16:10.727 ]' 00:16:10.727 11:45:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:10.727 { 00:16:10.727 "nbd_device": "/dev/nbd0", 00:16:10.727 "bdev_name": "Malloc0" 00:16:10.727 }, 00:16:10.727 { 00:16:10.727 "nbd_device": "/dev/nbd1", 00:16:10.727 "bdev_name": "Malloc1" 00:16:10.727 } 00:16:10.727 ]' 00:16:10.727 11:45:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:10.984 /dev/nbd1' 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:10.984 /dev/nbd1' 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:10.984 11:45:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:10.984 256+0 records in 00:16:10.984 256+0 records out 00:16:10.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00913459 s, 115 MB/s 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:10.985 256+0 records in 00:16:10.985 256+0 records out 00:16:10.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0400171 s, 26.2 MB/s 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:10.985 256+0 records in 00:16:10.985 256+0 records out 00:16:10.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0460964 s, 22.7 MB/s 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:10.985 11:45:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.985 11:45:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.276 11:45:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:11.534 11:45:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:11.792 11:45:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:16:11.792 11:45:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:12.358 11:45:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:16:13.738 [2024-07-25 11:45:10.370537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.738 [2024-07-25 11:45:10.550350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.738 [2024-07-25 11:45:10.550358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.738 [2024-07-25 11:45:10.718064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:13.738 [2024-07-25 11:45:10.718141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:15.659 11:45:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63565 /var/tmp/spdk-nbd.sock 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63565 ']' 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:15.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
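The write/verify passes traced at bdev/nbd_common.sh@70-@85 in both rounds follow one shape: fill a 1 MiB scratch file from /dev/urandom, dd it onto every exported nbd device with O_DIRECT, then on the verify pass cmp the first 1 MiB of each device back against the scratch file and delete it. A condensed sketch of that helper; $testdir again stands in for the traced test/event path.

    # Sketch of nbd_dd_data_verify (bdev/nbd_common.sh@70-@85), condensed
    # from the traced write and verify passes above.
    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=$testdir/nbdrandtest
        local i
        if [ "$operation" = write ]; then
            # 256 x 4 KiB of random data, pushed to every device (@76-@78).
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # @82-@85: cmp -b reports differing bytes; -n 1M limits the
            # compare to the region that was written.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }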
00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:16:15.659 11:45:12 event.app_repeat -- event/event.sh@39 -- # killprocess 63565 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63565 ']' 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63565 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63565 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.659 killing process with pid 63565 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63565' 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63565 00:16:15.659 11:45:12 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63565 00:16:16.591 spdk_app_start is called in Round 0. 00:16:16.591 Shutdown signal received, stop current app iteration 00:16:16.591 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:16:16.591 spdk_app_start is called in Round 1. 00:16:16.591 Shutdown signal received, stop current app iteration 00:16:16.591 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:16:16.591 spdk_app_start is called in Round 2. 00:16:16.591 Shutdown signal received, stop current app iteration 00:16:16.591 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:16:16.591 spdk_app_start is called in Round 3. 00:16:16.591 Shutdown signal received, stop current app iteration 00:16:16.591 11:45:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:16.591 11:45:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:16:16.591 00:16:16.591 real 0m22.195s 00:16:16.591 user 0m48.588s 00:16:16.591 sys 0m3.038s 00:16:16.592 11:45:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.592 11:45:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:16.592 ************************************ 00:16:16.592 END TEST app_repeat 00:16:16.592 ************************************ 00:16:16.848 11:45:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:16.848 11:45:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:16.848 11:45:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:16.848 11:45:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.848 11:45:13 event -- common/autotest_common.sh@10 -- # set +x 00:16:16.848 ************************************ 00:16:16.848 START TEST cpu_locks 00:16:16.848 ************************************ 00:16:16.848 11:45:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:16.848 * Looking for test storage... 
00:16:16.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:16.848 11:45:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:16.848 11:45:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:16.848 11:45:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:16.848 11:45:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:16.848 11:45:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:16.848 11:45:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.848 11:45:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:16.848 ************************************ 00:16:16.848 START TEST default_locks 00:16:16.848 ************************************ 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64030 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64030 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64030 ']' 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.848 11:45:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:16.848 [2024-07-25 11:45:13.841024] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:16.848 [2024-07-25 11:45:13.841184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64030 ] 00:16:17.106 [2024-07-25 11:45:14.007345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.363 [2024-07-25 11:45:14.209768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.928 11:45:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.928 11:45:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:16:17.928 11:45:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64030 00:16:17.928 11:45:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64030 00:16:17.928 11:45:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64030 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 64030 ']' 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 64030 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64030 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.494 killing process with pid 64030 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64030' 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 64030 00:16:18.494 11:45:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 64030 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64030 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64030 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 64030 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64030 ']' 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.396 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 ERROR: process (pid: 64030) is no longer running 00:16:20.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64030) - No such process 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:20.396 00:16:20.396 real 0m3.640s 00:16:20.396 user 0m3.718s 00:16:20.396 sys 0m0.519s 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.396 ************************************ 00:16:20.396 END TEST default_locks 00:16:20.396 ************************************ 00:16:20.396 11:45:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 11:45:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:20.396 11:45:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:20.396 11:45:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.396 11:45:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 ************************************ 00:16:20.396 START TEST default_locks_via_rpc 00:16:20.396 ************************************ 00:16:20.396 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64100 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64100 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64100 ']' 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
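The ERROR and No-such-process lines above are the expected outcome: default_locks kills the daemon and then asserts, through the NOT wrapper (common/autotest_common.sh@650-@677 in the trace), that waitforlisten now fails. NOT inverts the exit status while still treating signal-level exits (codes above 128) as genuine failures. A sketch of its traced skeleton; valid_exec_arg is reduced to its traced 'type -t' probe and the @672 pattern check is simplified away.

    # Sketch of the NOT helper from the common/autotest_common.sh@650-@677
    # trace; argument validation is simplified.
    NOT() {
        local es=0
        # @638-@642: only run things that are actually callable.
        [[ -n $(type -t "$1") ]] || return 1
        "$@" || es=$?
        # @661: exit codes above 128 mean a signal killed the command;
        # that still counts as a real failure, not an expected one.
        (( es > 128 )) && return "$es"
        # @677: succeed only if the wrapped command failed.
        (( !es == 0 )) && return 0
        return 1
    }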
00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.654 11:45:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.654 [2024-07-25 11:45:17.545625] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:20.654 [2024-07-25 11:45:17.545813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64100 ] 00:16:20.916 [2024-07-25 11:45:17.713952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.916 [2024-07-25 11:45:17.902520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64100 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64100 00:16:21.853 11:45:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64100 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 64100 ']' 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 64100 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64100 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.111 killing process with pid 64100 00:16:22.111 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64100' 00:16:22.112 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 64100 00:16:22.112 11:45:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 64100 00:16:24.640 00:16:24.640 real 0m3.748s 00:16:24.640 user 0m3.874s 00:16:24.640 sys 0m0.601s 00:16:24.640 11:45:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.640 ************************************ 00:16:24.640 END TEST default_locks_via_rpc 00:16:24.640 ************************************ 00:16:24.640 11:45:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.640 11:45:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:16:24.640 11:45:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.640 11:45:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.640 11:45:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:24.640 ************************************ 00:16:24.640 START TEST non_locking_app_on_locked_coremask 00:16:24.640 ************************************ 00:16:24.640 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:16:24.640 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64175 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64175 /var/tmp/spdk.sock 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64175 ']' 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.641 11:45:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:24.641 [2024-07-25 11:45:21.324177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
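default_locks_via_rpc drives the same lock check through the RPC layer: framework_disable_cpumask_locks makes the running target drop its lock file, no_locks (cpu_locks.sh@26-@27) asserts that no spdk_cpu_lock files remain, and framework_enable_cpumask_locks re-takes the lock so locks_exist (@22) can find it with lslocks. A sketch of the two assertions as traced; the lock-file glob and the nullglob setting in no_locks are assumptions, the trace shows only the empty-array check.

    # Sketch of the lock assertions from the event/cpu_locks.sh trace:
    # locks_exist at @22, no_locks at @26-@27.
    locks_exist() {
        # A live target holds an flock on a /var/tmp/spdk_cpu_lock file;
        # lslocks -p <pid> lists it (traced as 'lslocks -p 64100').
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    no_locks() {
        shopt -s nullglob    # assumed, so an empty glob yields an empty array
        local lock_files=(/var/tmp/spdk_cpu_lock*)   # assumed glob
        (( ${#lock_files[@]} != 0 )) && return 1     # traced as (( 0 != 0 ))
        return 0
    }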
00:16:24.641 [2024-07-25 11:45:21.324332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64175 ] 00:16:24.641 [2024-07-25 11:45:21.484121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.641 [2024-07-25 11:45:21.669884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64191 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64191 /var/tmp/spdk2.sock 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64191 ']' 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.607 11:45:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:25.607 [2024-07-25 11:45:22.519026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:25.607 [2024-07-25 11:45:22.519186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64191 ] 00:16:25.867 [2024-07-25 11:45:22.689346] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
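The launch order traced here is the core of non_locking_app_on_locked_coremask: the first spdk_tgt takes core 0 and its lock; a second instance can still start on the same core mask as long as it opts out of locking and talks on its own RPC socket. A sketch of the traced sequence (cpu_locks.sh@79-@87); $spdk_tgt abbreviates the traced build/bin path, and the backgrounding is implied by the pid captures.

    # Sketch of the non_locking_app_on_locked_coremask setup, from the
    # event/cpu_locks.sh@79-@87 trace. $spdk_tgt stands in for the traced
    # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt path.
    "$spdk_tgt" -m 0x1 &                     # @79-@80: holds the core-0 lock
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
    # @83-@85: same core mask, but no lock attempt and a private RPC socket,
    # so the second instance coexists instead of failing on the held lock.
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
    locks_exist "$spdk_tgt_pid"              # @87: the first instance owns the lock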
00:16:25.867 [2024-07-25 11:45:22.689411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.167 [2024-07-25 11:45:23.069474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.541 11:45:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.541 11:45:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:27.541 11:45:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64175 00:16:27.541 11:45:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64175 00:16:27.541 11:45:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64175 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64175 ']' 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64175 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64175 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.474 killing process with pid 64175 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64175' 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64175 00:16:28.474 11:45:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64175 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64191 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64191 ']' 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64191 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64191 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.657 killing process with pid 64191 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64191' 00:16:32.657 11:45:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64191 00:16:32.657 11:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64191 00:16:34.556 00:16:34.556 real 0m10.301s 00:16:34.556 user 0m10.815s 00:16:34.556 sys 0m1.119s 00:16:34.556 11:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.556 11:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:34.556 ************************************ 00:16:34.556 END TEST non_locking_app_on_locked_coremask 00:16:34.556 ************************************ 00:16:34.556 11:45:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:34.556 11:45:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:34.556 11:45:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.556 11:45:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:34.556 ************************************ 00:16:34.556 START TEST locking_app_on_unlocked_coremask 00:16:34.556 ************************************ 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64326 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64326 /var/tmp/spdk.sock 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64326 ']' 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.556 11:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:34.814 [2024-07-25 11:45:31.689030] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:34.814 [2024-07-25 11:45:31.689200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64326 ] 00:16:35.072 [2024-07-25 11:45:31.861835] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
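Every test above tears down through killprocess (common/autotest_common.sh@950-@974), traced in full for pids 64175 and 64191 just before this point: verify the pid is alive, check it is not a sudo wrapper before signalling (the 'ps -o comm=' / reactor_0 lines), then kill and reap it. A sketch from the traced commands; the sudo-branch handling is an assumption, only the negative check is visible in the log.

    # Sketch of killprocess from the common/autotest_common.sh@950-@974
    # trace. The sudo-wrapper handling is assumed.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # @950, traced as '[' -z 64175 ']'
        kill -0 "$pid"                       # @954: fail fast if already gone
        if [ "$(uname)" = Linux ]; then
            # @956-@960: never signal a sudo wrapper directly; here the
            # traced process_name is reactor_0, so signalling is safe.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # assumed handling
        fi
        echo "killing process with pid $pid" # @968
        kill "$pid"                          # @969
        wait "$pid"                          # @974: reap and propagate status
    }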
00:16:35.072 [2024-07-25 11:45:31.861909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.072 [2024-07-25 11:45:32.086592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64342 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64342 /var/tmp/spdk2.sock 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64342 ']' 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:36.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.008 11:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:36.008 [2024-07-25 11:45:32.906924] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
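The locks_exist check traced above reduces to a one-line probe of the kernel lock table. A hedged reconstruction of the helper at event/cpu_locks.sh@22, based only on the two commands visible in the trace:

  locks_exist() {
    # True when pid $1 holds a POSIX lock on one of the spdk_cpu_lock_* files.
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }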
00:16:36.008 [2024-07-25 11:45:32.907114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64342 ] 00:16:36.267 [2024-07-25 11:45:33.087475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.525 [2024-07-25 11:45:33.483185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.055 11:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.055 11:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:39.055 11:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64342 00:16:39.055 11:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64342 00:16:39.055 11:45:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64326 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64326 ']' 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64326 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64326 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.623 killing process with pid 64326 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64326' 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64326 00:16:39.623 11:45:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64326 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64342 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64342 ']' 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64342 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64342 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.812 killing process with pid 64342 00:16:43.812 11:45:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64342' 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64342 00:16:43.812 11:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64342 00:16:45.759 00:16:45.759 real 0m11.098s 00:16:45.759 user 0m11.872s 00:16:45.759 sys 0m1.172s 00:16:45.759 ************************************ 00:16:45.759 END TEST locking_app_on_unlocked_coremask 00:16:45.759 ************************************ 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:45.759 11:45:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:45.759 11:45:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:45.759 11:45:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.759 11:45:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:45.759 ************************************ 00:16:45.759 START TEST locking_app_on_locked_coremask 00:16:45.759 ************************************ 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64485 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64485 /var/tmp/spdk.sock 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64485 ']' 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.759 11:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:46.018 [2024-07-25 11:45:42.825677] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
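killprocess runs before every teardown above, and the xtrace gives away most of its shape (autotest_common.sh@950 through @974). A sketch reconstructed from those lines; the sudo branch is left empty because this run never takes it (comm is always reactor_0 here):

  killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2> /dev/null || { echo "Process with pid $pid is not found"; return 1; }
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
      :  # separate kill path, not exercised in this log, so not reconstructed
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }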
00:16:46.018 [2024-07-25 11:45:42.825854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64485 ] 00:16:46.018 [2024-07-25 11:45:42.991034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.276 [2024-07-25 11:45:43.215522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64501 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64501 /var/tmp/spdk2.sock 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64501 /var/tmp/spdk2.sock 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64501 /var/tmp/spdk2.sock 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64501 ']' 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.212 11:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:47.212 [2024-07-25 11:45:44.039637] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
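The second target here (pid 64501) runs under NOT waitforlisten: the harness expects the launch to fail because core 0 is already locked by pid 64485. What the trace reveals of the wrapper (autotest_common.sh@650 through @677), sketched conservatively; the traced valid_exec_arg and (( es > 128 )) guards are not reconstructed since their bodies are not visible in this log:

  NOT() {
    # Inverted expectation: succeed only if the wrapped command fails.
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }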
00:16:47.212 [2024-07-25 11:45:44.039801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64501 ] 00:16:47.213 [2024-07-25 11:45:44.213312] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64485 has claimed it. 00:16:47.213 [2024-07-25 11:45:44.213390] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:47.779 ERROR: process (pid: 64501) is no longer running 00:16:47.779 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64501) - No such process 00:16:47.779 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.779 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:16:47.779 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:16:47.779 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:47.779 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:47.779 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:47.780 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64485 00:16:47.780 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64485 00:16:47.780 11:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64485 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64485 ']' 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64485 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64485 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.346 killing process with pid 64485 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64485' 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64485 00:16:48.346 11:45:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64485 00:16:50.250 00:16:50.250 real 0m4.556s 00:16:50.250 user 0m4.994s 00:16:50.250 sys 0m0.737s 00:16:50.250 ************************************ 00:16:50.250 END TEST locking_app_on_locked_coremask 00:16:50.250 ************************************ 00:16:50.250 11:45:47 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.250 11:45:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:50.508 11:45:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:50.508 11:45:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:50.508 11:45:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.508 11:45:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:50.508 ************************************ 00:16:50.508 START TEST locking_overlapped_coremask 00:16:50.508 ************************************ 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64569 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64569 /var/tmp/spdk.sock 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64569 ']' 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.508 11:45:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:50.509 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.509 11:45:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:50.509 [2024-07-25 11:45:47.449322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
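Every START TEST / END TEST banner pair in this log comes from the run_test wrapper, and the real/user/sys triplet between them is ordinary bash time output. A plausible sketch, assuming the banners and timing are the wrapper's only visible bookkeeping:

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }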
00:16:50.509 [2024-07-25 11:45:47.449493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64569 ] 00:16:50.768 [2024-07-25 11:45:47.618850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.026 [2024-07-25 11:45:47.807120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.026 [2024-07-25 11:45:47.807246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.026 [2024-07-25 11:45:47.807257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64594 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64594 /var/tmp/spdk2.sock 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64594 /var/tmp/spdk2.sock 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64594 /var/tmp/spdk2.sock 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64594 ']' 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:51.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.592 11:45:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:51.851 [2024-07-25 11:45:48.634057] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
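The overlapped_coremask pair uses -m 0x7 for the first target and -m 0x1c for the second, so exactly one core is contested. The claim error that follows ("Cannot create lock on core 2") is just this bit arithmetic:

  printf '0x%x\n' $(( 0x07 & 0x1c ))   # 0x4, i.e. bit 2 set: only core 2 overlaps
  # 0x07 = 0b00111 -> cores 0,1,2   (first target)
  # 0x1c = 0b11100 -> cores 2,3,4   (second target)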
00:16:51.851 [2024-07-25 11:45:48.634904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64594 ] 00:16:51.851 [2024-07-25 11:45:48.822436] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64569 has claimed it. 00:16:51.851 [2024-07-25 11:45:48.822523] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:52.417 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64594) - No such process 00:16:52.417 ERROR: process (pid: 64594) is no longer running 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64569 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64569 ']' 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64569 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64569 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.417 killing process with pid 64569 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.417 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64569' 00:16:52.418 11:45:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64569 00:16:52.418 11:45:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64569 00:16:54.947 00:16:54.947 real 0m4.061s 00:16:54.947 user 0m10.666s 00:16:54.947 sys 0m0.528s 00:16:54.947 11:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.947 11:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:54.947 ************************************ 00:16:54.947 END TEST locking_overlapped_coremask 00:16:54.947 ************************************ 00:16:54.947 11:45:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:54.947 11:45:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:54.947 11:45:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.947 11:45:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:54.947 ************************************ 00:16:54.947 START TEST locking_overlapped_coremask_via_rpc 00:16:54.948 ************************************ 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64647 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64647 /var/tmp/spdk.sock 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64647 ']' 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.948 11:45:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.948 [2024-07-25 11:45:51.569539] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:54.948 [2024-07-25 11:45:51.569706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64647 ] 00:16:54.948 [2024-07-25 11:45:51.737999] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
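Before tearing down, the overlapped test above ran check_remaining_locks; the long backslash-escaped [[ ... ]] pattern in the trace is just a literal comparison of two globs. Reconstructed from event/cpu_locks.sh@36 through @38:

  check_remaining_locks() {
    # The lock files actually present must be exactly those for cores 000-002.
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }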
00:16:54.948 [2024-07-25 11:45:51.738254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.948 [2024-07-25 11:45:51.968759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.948 [2024-07-25 11:45:51.968801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.948 [2024-07-25 11:45:51.968820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64676 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64676 /var/tmp/spdk2.sock 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64676 ']' 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:55.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.882 11:45:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.882 [2024-07-25 11:45:52.801137] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:55.882 [2024-07-25 11:45:52.801856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64676 ] 00:16:56.141 [2024-07-25 11:45:52.985933] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
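At this point both via_rpc targets are up with their core locks deactivated; the test then flips the locks on at runtime. The harness drives this through rpc_cmd over a pipe, but the equivalent direct calls would look roughly like this (framework_enable_cpumask_locks is the RPC named in the trace):

  scripts/rpc.py framework_enable_cpumask_locks   # first target claims cores 0-2
  NOT scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # second target overlaps on core 2, so its claim is expected to fail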
00:16:56.141 [2024-07-25 11:45:52.986001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.399 [2024-07-25 11:45:53.370147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.399 [2024-07-25 11:45:53.370208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.399 [2024-07-25 11:45:53.370222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.770 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.770 [2024-07-25 11:45:54.801911] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64647 has claimed it. 00:16:58.028 request: 00:16:58.028 { 00:16:58.028 "method": "framework_enable_cpumask_locks", 00:16:58.028 "req_id": 1 00:16:58.028 } 00:16:58.028 Got JSON-RPC error response 00:16:58.028 response: 00:16:58.028 { 00:16:58.028 "code": -32603, 00:16:58.028 "message": "Failed to claim CPU core: 2" 00:16:58.028 } 00:16:58.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
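The failed claim surfaces as a JSON-RPC internal error, worth contrasting with the allowlist error later in this log: -32603 means the method ran and failed, while -32601 means the method was never found. The exchange above, condensed:

  # request:  { "method": "framework_enable_cpumask_locks", "req_id": 1 }
  # response: { "code": -32603, "message": "Failed to claim CPU core: 2" }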
00:16:58.028 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:58.028 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:58.028 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.028 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64647 /var/tmp/spdk.sock 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64647 ']' 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.029 11:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64676 /var/tmp/spdk2.sock 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64676 ']' 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:58.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
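waitforlisten's traced locals (rpc_addr, max_retries=100) and the (( i == 0 )) test seen after each startup outline a countdown poll loop. A sketch under those assumptions; how the real helper probes the socket is not visible in this log, so the rpc_get_methods ping below is a guess:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i != 0; i--)); do
      kill -0 "$pid" 2> /dev/null || return 1                       # target died
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
      sleep 0.5
    done
    (( i == 0 )) && return 1                                        # retries exhausted
    return 0
  }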
00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.286 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:58.545 00:16:58.545 real 0m3.909s 00:16:58.545 user 0m1.416s 00:16:58.545 sys 0m0.174s 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.545 11:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.545 ************************************ 00:16:58.545 END TEST locking_overlapped_coremask_via_rpc 00:16:58.545 ************************************ 00:16:58.545 11:45:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:16:58.545 11:45:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64647 ]] 00:16:58.545 11:45:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64647 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64647 ']' 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64647 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64647 00:16:58.545 killing process with pid 64647 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64647' 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64647 00:16:58.545 11:45:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64647 00:17:01.072 11:45:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64676 ]] 00:17:01.072 11:45:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64676 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64676 ']' 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64676 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.072 
11:45:57 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64676 00:17:01.072 killing process with pid 64676 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64676' 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64676 00:17:01.072 11:45:57 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64676 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:02.973 Process with pid 64647 is not found 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64647 ]] 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64647 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64647 ']' 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64647 00:17:02.973 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64647) - No such process 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64647 is not found' 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64676 ]] 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64676 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64676 ']' 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64676 00:17:02.973 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64676) - No such process 00:17:02.973 Process with pid 64676 is not found 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64676 is not found' 00:17:02.973 11:45:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:02.973 ************************************ 00:17:02.973 END TEST cpu_locks 00:17:02.973 ************************************ 00:17:02.973 00:17:02.973 real 0m46.005s 00:17:02.973 user 1m17.381s 00:17:02.973 sys 0m5.714s 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.973 11:45:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:02.973 00:17:02.973 real 1m17.127s 00:17:02.973 user 2m18.913s 00:17:02.973 sys 0m9.641s 00:17:02.973 11:45:59 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.973 ************************************ 00:17:02.974 END TEST event 00:17:02.974 ************************************ 00:17:02.974 11:45:59 event -- common/autotest_common.sh@10 -- # set +x 00:17:02.974 11:45:59 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:02.974 11:45:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:02.974 11:45:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.974 11:45:59 -- common/autotest_common.sh@10 -- # set +x 00:17:02.974 ************************************ 00:17:02.974 START TEST thread 00:17:02.974 ************************************ 00:17:02.974 11:45:59 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:02.974 * Looking for test storage... 
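The cpu_locks suite exits through a cleanup trap: both recorded pids are killed if still present (here they are already gone, hence the two "No such process" fallbacks above), and leftover lock files are removed. Sketched from the @15 through @18 trace; the argument to the traced rm -f is truncated in the log, so the glob below is an assumption based on check_remaining_locks:

  cleanup() {
    [[ -z $spdk_tgt_pid ]] || killprocess "$spdk_tgt_pid"
    [[ -z $spdk_tgt_pid2 ]] || killprocess "$spdk_tgt_pid2"
    rm -f /var/tmp/spdk_cpu_lock_*   # assumed target of the traced 'rm -f'
  }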
00:17:02.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:02.974 11:45:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:02.974 11:45:59 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:17:02.974 11:45:59 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.974 11:45:59 thread -- common/autotest_common.sh@10 -- # set +x 00:17:02.974 ************************************ 00:17:02.974 START TEST thread_poller_perf 00:17:02.974 ************************************ 00:17:02.974 11:45:59 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:02.974 [2024-07-25 11:45:59.880944] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:02.974 [2024-07-25 11:45:59.881250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64845 ] 00:17:03.232 [2024-07-25 11:46:00.047426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.232 [2024-07-25 11:46:00.261938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.232 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:17:05.136 ====================================== 00:17:05.136 busy:2217650074 (cyc) 00:17:05.136 total_run_count: 281000 00:17:05.136 tsc_hz: 2200000000 (cyc) 00:17:05.136 ====================================== 00:17:05.136 poller_cost: 7891 (cyc), 3586 (nsec) 00:17:05.136 00:17:05.136 real 0m1.824s 00:17:05.136 user 0m1.612s 00:17:05.136 sys 0m0.100s 00:17:05.136 11:46:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.136 11:46:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:05.136 ************************************ 00:17:05.136 END TEST thread_poller_perf 00:17:05.136 ************************************ 00:17:05.136 11:46:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:05.136 11:46:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:17:05.136 11:46:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.136 11:46:01 thread -- common/autotest_common.sh@10 -- # set +x 00:17:05.136 ************************************ 00:17:05.136 START TEST thread_poller_perf 00:17:05.136 ************************************ 00:17:05.136 11:46:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:05.136 [2024-07-25 11:46:01.752062] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
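The poller_perf summary above is pure division over the printed counters; reproducing it shows where the 7891 cycles and 3586 ns come from:

  echo $(( 2217650074 / 281000 ))              # busy cycles / run count = 7891 cyc per poll
  echo $(( 7891 * 1000000000 / 2200000000 ))   # at tsc_hz 2.2 GHz: 3586 ns per poll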
00:17:05.136 [2024-07-25 11:46:01.752242] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64887 ] 00:17:05.136 [2024-07-25 11:46:01.923302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.136 [2024-07-25 11:46:02.118070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.136 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:17:06.511 ====================================== 00:17:06.511 busy:2203995522 (cyc) 00:17:06.511 total_run_count: 3656000 00:17:06.511 tsc_hz: 2200000000 (cyc) 00:17:06.511 ====================================== 00:17:06.511 poller_cost: 602 (cyc), 273 (nsec) 00:17:06.511 ************************************ 00:17:06.511 END TEST thread_poller_perf 00:17:06.511 ************************************ 00:17:06.511 00:17:06.511 real 0m1.798s 00:17:06.511 user 0m1.579s 00:17:06.511 sys 0m0.109s 00:17:06.511 11:46:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.511 11:46:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:17:06.770 11:46:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:06.770 00:17:06.770 real 0m3.792s 00:17:06.770 user 0m3.256s 00:17:06.770 sys 0m0.305s 00:17:06.770 ************************************ 00:17:06.770 END TEST thread 00:17:06.770 ************************************ 00:17:06.770 11:46:03 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.770 11:46:03 thread -- common/autotest_common.sh@10 -- # set +x 00:17:06.770 11:46:03 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:17:06.770 11:46:03 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:06.770 11:46:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:06.770 11:46:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.770 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:17:06.770 ************************************ 00:17:06.770 START TEST app_cmdline 00:17:06.770 ************************************ 00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:06.770 * Looking for test storage... 00:17:06.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:06.770 11:46:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:06.770 11:46:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64962 00:17:06.770 11:46:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64962 00:17:06.770 11:46:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 64962 ']' 00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
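Same arithmetic for the 0 microsecond run above: the per-poll cost collapses, presumably because a zero-period poller skips the timer bookkeeping a 1 microsecond poller pays for on every dispatch:

  echo $(( 2203995522 / 3656000 ))             # 602 cyc per poll
  echo $(( 602 * 1000000000 / 2200000000 ))    # 273 ns per poll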
00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.770 11:46:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:06.770 [2024-07-25 11:46:03.772249] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:06.770 [2024-07-25 11:46:03.772400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64962 ] 00:17:07.028 [2024-07-25 11:46:03.934068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.286 [2024-07-25 11:46:04.119518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.853 11:46:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.853 11:46:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:17:07.853 11:46:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:08.111 { 00:17:08.111 "version": "SPDK v24.09-pre git sha1 704257090", 00:17:08.111 "fields": { 00:17:08.111 "major": 24, 00:17:08.111 "minor": 9, 00:17:08.111 "patch": 0, 00:17:08.111 "suffix": "-pre", 00:17:08.111 "commit": "704257090" 00:17:08.111 } 00:17:08.111 } 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:08.111 11:46:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.111 11:46:05 app_cmdline -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:08.111 11:46:05 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:08.369 request: 00:17:08.369 { 00:17:08.369 "method": "env_dpdk_get_mem_stats", 00:17:08.369 "req_id": 1 00:17:08.369 } 00:17:08.369 Got JSON-RPC error response 00:17:08.369 response: 00:17:08.369 { 00:17:08.369 "code": -32601, 00:17:08.369 "message": "Method not found" 00:17:08.369 } 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.369 11:46:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64962 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 64962 ']' 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 64962 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:17:08.369 11:46:05 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.627 11:46:05 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64962 00:17:08.627 11:46:05 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.627 killing process with pid 64962 00:17:08.627 11:46:05 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.627 11:46:05 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64962' 00:17:08.627 11:46:05 app_cmdline -- common/autotest_common.sh@969 -- # kill 64962 00:17:08.627 11:46:05 app_cmdline -- common/autotest_common.sh@974 -- # wait 64962 00:17:10.528 00:17:10.528 real 0m3.922s 00:17:10.528 user 0m4.467s 00:17:10.528 sys 0m0.462s 00:17:10.528 11:46:07 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.528 11:46:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:10.528 ************************************ 00:17:10.528 END TEST app_cmdline 00:17:10.528 ************************************ 00:17:10.528 11:46:07 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:10.528 11:46:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:10.528 11:46:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.528 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:10.787 ************************************ 00:17:10.787 START TEST version 00:17:10.787 ************************************ 00:17:10.787 11:46:07 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:10.787 * Looking for test storage... 
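The cmdline test pins the target to a two-method RPC allowlist, which is why env_dpdk_get_mem_stats above is rejected with -32601 even though it is a real method. The scenario, restated outside the harness:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # allowed: returns the version JSON above
  scripts/rpc.py env_dpdk_get_mem_stats   # filtered out: "Method not found" (-32601)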
00:17:10.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:10.787 11:46:07 version -- app/version.sh@17 -- # get_header_version major 00:17:10.787 11:46:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # cut -f2 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # tr -d '"' 00:17:10.787 11:46:07 version -- app/version.sh@17 -- # major=24 00:17:10.787 11:46:07 version -- app/version.sh@18 -- # get_header_version minor 00:17:10.787 11:46:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # tr -d '"' 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # cut -f2 00:17:10.787 11:46:07 version -- app/version.sh@18 -- # minor=9 00:17:10.787 11:46:07 version -- app/version.sh@19 -- # get_header_version patch 00:17:10.787 11:46:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # cut -f2 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # tr -d '"' 00:17:10.787 11:46:07 version -- app/version.sh@19 -- # patch=0 00:17:10.787 11:46:07 version -- app/version.sh@20 -- # get_header_version suffix 00:17:10.787 11:46:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # cut -f2 00:17:10.787 11:46:07 version -- app/version.sh@14 -- # tr -d '"' 00:17:10.787 11:46:07 version -- app/version.sh@20 -- # suffix=-pre 00:17:10.787 11:46:07 version -- app/version.sh@22 -- # version=24.9 00:17:10.787 11:46:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:10.787 11:46:07 version -- app/version.sh@28 -- # version=24.9rc0 00:17:10.787 11:46:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:10.787 11:46:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:10.787 11:46:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:17:10.787 11:46:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:17:10.787 00:17:10.787 real 0m0.134s 00:17:10.787 user 0m0.083s 00:17:10.787 sys 0m0.079s 00:17:10.787 11:46:07 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.787 ************************************ 00:17:10.787 END TEST version 00:17:10.787 ************************************ 00:17:10.787 11:46:07 version -- common/autotest_common.sh@10 -- # set +x 00:17:10.787 11:46:07 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:17:10.787 11:46:07 -- spdk/autotest.sh@202 -- # uname -s 00:17:10.787 11:46:07 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:17:10.787 11:46:07 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:17:10.787 11:46:07 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:17:10.787 11:46:07 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:17:10.787 11:46:07 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:10.787 11:46:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
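version.sh derives each component with the same grep/cut/tr pipeline traced above, then assembles the string. A reconstruction of app/version.sh@13 and @14, plus the assembly implied by the trace (a patch of 0 is dropped, and the -pre suffix evidently maps to an rc0 tag):

  get_header_version() {
    # $1 is MAJOR, MINOR, PATCH or SUFFIX; cut -f2 relies on the tab layout
    # of include/spdk/version.h.
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h \
      | cut -f2 | tr -d '"'
  }
  # major=24 minor=9 patch=0 suffix=-pre  ->  version=24.9  ->  24.9rc0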
00:17:10.787 11:46:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.787 11:46:07 -- common/autotest_common.sh@10 -- # set +x 00:17:10.787 ************************************ 00:17:10.787 START TEST blockdev_nvme 00:17:10.787 ************************************ 00:17:10.787 11:46:07 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:10.787 * Looking for test storage... 00:17:11.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:11.045 11:46:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:11.045 11:46:07 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65135 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65135 00:17:11.046 11:46:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:11.046 11:46:07 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 65135 ']' 00:17:11.046 11:46:07 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.046 11:46:07 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:11.046 11:46:07 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.046 11:46:07 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.046 11:46:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.046 [2024-07-25 11:46:07.949728] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:11.046 [2024-07-25 11:46:07.949904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65135 ] 00:17:11.304 [2024-07-25 11:46:08.120264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.562 [2024-07-25 11:46:08.390285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.128 11:46:09 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.128 11:46:09 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:17:12.128 11:46:09 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:12.128 11:46:09 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:17:12.128 11:46:09 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:17:12.128 11:46:09 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:17:12.129 11:46:09 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:12.129 11:46:09 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:17:12.129 11:46:09 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.129 11:46:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.431 11:46:09 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.431 11:46:09 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:17:12.431 11:46:09 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.431 11:46:09 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.431 11:46:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 11:46:09 blockdev_nvme -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.690 11:46:09 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:12.690 11:46:09 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.690 11:46:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 11:46:09 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.690 11:46:09 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:12.690 11:46:09 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:12.690 11:46:09 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.690 11:46:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 11:46:09 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:12.690 11:46:09 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.690 11:46:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:12.690 11:46:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:12.691 11:46:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6d35faf1-abdd-4cf4-be5f-e8cf88d15e33"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d35faf1-abdd-4cf4-be5f-e8cf88d15e33",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "feb606f0-813d-440d-8e06-5f528d23d720"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "feb606f0-813d-440d-8e06-5f528d23d720",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' 
' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d805c846-3c36-466b-8502-c6cc97372bc3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d805c846-3c36-466b-8502-c6cc97372bc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0e1077d1-e965-45e1-b9a9-2e9be42b2473"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0e1077d1-e965-45e1-b9a9-2e9be42b2473",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' 
' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3eab40e4-e1e7-4511-8fa3-3b0a58ef46d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3eab40e4-e1e7-4511-8fa3-3b0a58ef46d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e1afe6b0-e467-44bd-a391-755e1de720ad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e1afe6b0-e467-44bd-a391-755e1de720ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:17:12.691 11:46:09 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:12.691 11:46:09 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:17:12.691 11:46:09 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:12.691 11:46:09 blockdev_nvme -- bdev/blockdev.sh@753 -- # 
killprocess 65135 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 65135 ']' 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 65135 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65135 00:17:12.691 killing process with pid 65135 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65135' 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 65135 00:17:12.691 11:46:09 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 65135 00:17:15.218 11:46:11 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:15.218 11:46:11 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:15.218 11:46:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:15.218 11:46:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.218 11:46:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.218 ************************************ 00:17:15.218 START TEST bdev_hello_world 00:17:15.218 ************************************ 00:17:15.218 11:46:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:15.218 [2024-07-25 11:46:11.828560] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:15.218 [2024-07-25 11:46:11.828737] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65224 ] 00:17:15.218 [2024-07-25 11:46:11.990565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.218 [2024-07-25 11:46:12.175642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.784 [2024-07-25 11:46:12.780883] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:15.784 [2024-07-25 11:46:12.780940] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:17:15.784 [2024-07-25 11:46:12.780971] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:15.784 [2024-07-25 11:46:12.783940] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:15.784 [2024-07-25 11:46:12.784303] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:15.784 [2024-07-25 11:46:12.784337] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:15.784 [2024-07-25 11:46:12.784708] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:15.784 00:17:15.784 [2024-07-25 11:46:12.784752] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:17.174 00:17:17.174 real 0m2.146s 00:17:17.174 user 0m1.839s 00:17:17.174 sys 0m0.198s 00:17:17.174 ************************************ 00:17:17.174 END TEST bdev_hello_world 00:17:17.174 ************************************ 00:17:17.174 11:46:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.174 11:46:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:17.174 11:46:13 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:17.174 11:46:13 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.174 11:46:13 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.174 11:46:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.174 ************************************ 00:17:17.174 START TEST bdev_bounds 00:17:17.174 ************************************ 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65272 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:17.174 Process bdevio pid: 65272 00:17:17.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65272' 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65272 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65272 ']' 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.174 11:46:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:17.174 [2024-07-25 11:46:14.053262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:17.174 [2024-07-25 11:46:14.053438] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65272 ] 00:17:17.432 [2024-07-25 11:46:14.222557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.432 [2024-07-25 11:46:14.410552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.432 [2024-07-25 11:46:14.410618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.432 [2024-07-25 11:46:14.410618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.372 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.372 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:18.372 11:46:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:18.372 I/O targets: 00:17:18.372 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:18.372 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:18.372 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:18.372 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:18.372 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:18.372 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:18.372 00:17:18.372 00:17:18.372 CUnit - A unit testing framework for C - Version 2.1-3 00:17:18.372 http://cunit.sourceforge.net/ 00:17:18.372 00:17:18.372 00:17:18.372 Suite: bdevio tests on: Nvme3n1 00:17:18.372 Test: blockdev write read block ...passed 00:17:18.372 Test: blockdev write zeroes read block ...passed 00:17:18.372 Test: blockdev write zeroes read no split ...passed 00:17:18.372 Test: blockdev write zeroes read split ...passed 00:17:18.372 Test: blockdev write zeroes read split partial ...passed 00:17:18.372 Test: blockdev reset ...[2024-07-25 11:46:15.264037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:17:18.372 [2024-07-25 11:46:15.267945] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.372 passed 00:17:18.372 Test: blockdev write read 8 blocks ...passed 00:17:18.372 Test: blockdev write read size > 128k ...passed 00:17:18.372 Test: blockdev write read invalid size ...passed 00:17:18.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.372 Test: blockdev write read max offset ...passed 00:17:18.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.372 Test: blockdev writev readv 8 blocks ...passed 00:17:18.372 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.372 Test: blockdev writev readv block ...passed 00:17:18.372 Test: blockdev writev readv size > 128k ...passed 00:17:18.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.372 Test: blockdev comparev and writev ...[2024-07-25 11:46:15.277839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27440a000 len:0x1000 00:17:18.372 [2024-07-25 11:46:15.277905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:18.372 passed 00:17:18.372 Test: blockdev nvme passthru rw ...passed 00:17:18.372 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:46:15.278912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:18.372 [2024-07-25 11:46:15.278958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:18.372 passed 00:17:18.372 Test: blockdev nvme admin passthru ...passed 00:17:18.372 Test: blockdev copy ...passed 00:17:18.372 Suite: bdevio tests on: Nvme2n3 00:17:18.372 Test: blockdev write read block ...passed 00:17:18.372 Test: blockdev write zeroes read block ...passed 00:17:18.372 Test: blockdev write zeroes read no split ...passed 00:17:18.372 Test: blockdev write zeroes read split ...passed 00:17:18.372 Test: blockdev write zeroes read split partial ...passed 00:17:18.372 Test: blockdev reset ...[2024-07-25 11:46:15.351105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:17:18.372 [2024-07-25 11:46:15.355433] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.372 passed 00:17:18.372 Test: blockdev write read 8 blocks ...passed 00:17:18.372 Test: blockdev write read size > 128k ...passed 00:17:18.372 Test: blockdev write read invalid size ...passed 00:17:18.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.372 Test: blockdev write read max offset ...passed 00:17:18.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.372 Test: blockdev writev readv 8 blocks ...passed 00:17:18.372 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.372 Test: blockdev writev readv block ...passed 00:17:18.372 Test: blockdev writev readv size > 128k ...passed 00:17:18.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.372 Test: blockdev comparev and writev ...[2024-07-25 11:46:15.363503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x256a04000 len:0x1000 00:17:18.372 [2024-07-25 11:46:15.363566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:18.372 passed 00:17:18.372 Test: blockdev nvme passthru rw ...passed 00:17:18.372 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:46:15.364420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:18.372 [2024-07-25 11:46:15.364463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:18.372 passed 00:17:18.372 Test: blockdev nvme admin passthru ...passed 00:17:18.372 Test: blockdev copy ...passed 00:17:18.372 Suite: bdevio tests on: Nvme2n2 00:17:18.372 Test: blockdev write read block ...passed 00:17:18.372 Test: blockdev write zeroes read block ...passed 00:17:18.372 Test: blockdev write zeroes read no split ...passed 00:17:18.631 Test: blockdev write zeroes read split ...passed 00:17:18.631 Test: blockdev write zeroes read split partial ...passed 00:17:18.631 Test: blockdev reset ...[2024-07-25 11:46:15.436946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:17:18.631 [2024-07-25 11:46:15.441376] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.631 passed 00:17:18.631 Test: blockdev write read 8 blocks ...passed 00:17:18.631 Test: blockdev write read size > 128k ...passed 00:17:18.631 Test: blockdev write read invalid size ...passed 00:17:18.631 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.631 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.632 Test: blockdev write read max offset ...passed 00:17:18.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.632 Test: blockdev writev readv 8 blocks ...passed 00:17:18.632 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.632 Test: blockdev writev readv block ...passed 00:17:18.632 Test: blockdev writev readv size > 128k ...passed 00:17:18.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.632 Test: blockdev comparev and writev ...[2024-07-25 11:46:15.450243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28643a000 len:0x1000 00:17:18.632 [2024-07-25 11:46:15.450305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:18.632 passed 00:17:18.632 Test: blockdev nvme passthru rw ...passed 00:17:18.632 Test: blockdev nvme passthru vendor specific ...passed 00:17:18.632 Test: blockdev nvme admin passthru ...[2024-07-25 11:46:15.451186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:18.632 [2024-07-25 11:46:15.451241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:18.632 passed 00:17:18.632 Test: blockdev copy ...passed 00:17:18.632 Suite: bdevio tests on: Nvme2n1 00:17:18.632 Test: blockdev write read block ...passed 00:17:18.632 Test: blockdev write zeroes read block ...passed 00:17:18.632 Test: blockdev write zeroes read no split ...passed 00:17:18.632 Test: blockdev write zeroes read split ...passed 00:17:18.632 Test: blockdev write zeroes read split partial ...passed 00:17:18.632 Test: blockdev reset ...[2024-07-25 11:46:15.526462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:17:18.632 [2024-07-25 11:46:15.530826] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.632 passed 00:17:18.632 Test: blockdev write read 8 blocks ...passed 00:17:18.632 Test: blockdev write read size > 128k ...passed 00:17:18.632 Test: blockdev write read invalid size ...passed 00:17:18.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.632 Test: blockdev write read max offset ...passed 00:17:18.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.632 Test: blockdev writev readv 8 blocks ...passed 00:17:18.632 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.632 Test: blockdev writev readv block ...passed 00:17:18.632 Test: blockdev writev readv size > 128k ...passed 00:17:18.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.632 Test: blockdev comparev and writev ...[2024-07-25 11:46:15.539301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x286434000 len:0x1000 00:17:18.632 [2024-07-25 11:46:15.539365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:18.632 passed 00:17:18.632 Test: blockdev nvme passthru rw ...passed 00:17:18.632 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:46:15.540253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:18.632 [2024-07-25 11:46:15.540300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:18.632 passed 00:17:18.632 Test: blockdev nvme admin passthru ...passed 00:17:18.632 Test: blockdev copy ...passed 00:17:18.632 Suite: bdevio tests on: Nvme1n1 00:17:18.632 Test: blockdev write read block ...passed 00:17:18.632 Test: blockdev write zeroes read block ...passed 00:17:18.632 Test: blockdev write zeroes read no split ...passed 00:17:18.632 Test: blockdev write zeroes read split ...passed 00:17:18.632 Test: blockdev write zeroes read split partial ...passed 00:17:18.632 Test: blockdev reset ...[2024-07-25 11:46:15.616271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:17:18.632 [2024-07-25 11:46:15.619947] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.632 passed 00:17:18.632 Test: blockdev write read 8 blocks ...passed 00:17:18.632 Test: blockdev write read size > 128k ...passed 00:17:18.632 Test: blockdev write read invalid size ...passed 00:17:18.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.632 Test: blockdev write read max offset ...passed 00:17:18.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.632 Test: blockdev writev readv 8 blocks ...passed 00:17:18.632 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.632 Test: blockdev writev readv block ...passed 00:17:18.632 Test: blockdev writev readv size > 128k ...passed 00:17:18.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.632 Test: blockdev comparev and writev ...[2024-07-25 11:46:15.629413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x286430000 len:0x1000 00:17:18.632 [2024-07-25 11:46:15.629477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:18.632 passed 00:17:18.632 Test: blockdev nvme passthru rw ...passed 00:17:18.632 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:46:15.630351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:18.632 [2024-07-25 11:46:15.630395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:18.632 passed 00:17:18.632 Test: blockdev nvme admin passthru ...passed 00:17:18.632 Test: blockdev copy ...passed 00:17:18.632 Suite: bdevio tests on: Nvme0n1 00:17:18.632 Test: blockdev write read block ...passed 00:17:18.632 Test: blockdev write zeroes read block ...passed 00:17:18.632 Test: blockdev write zeroes read no split ...passed 00:17:18.891 Test: blockdev write zeroes read split ...passed 00:17:18.891 Test: blockdev write zeroes read split partial ...passed 00:17:18.891 Test: blockdev reset ...[2024-07-25 11:46:15.719329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:17:18.891 [2024-07-25 11:46:15.723258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.891 passed 00:17:18.891 Test: blockdev write read 8 blocks ...passed 00:17:18.891 Test: blockdev write read size > 128k ...passed 00:17:18.891 Test: blockdev write read invalid size ...passed 00:17:18.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.891 Test: blockdev write read max offset ...passed 00:17:18.891 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.891 Test: blockdev writev readv 8 blocks ...passed 00:17:18.891 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.891 Test: blockdev writev readv block ...passed 00:17:18.891 Test: blockdev writev readv size > 128k ...passed 00:17:18.891 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.891 Test: blockdev comparev and writev ...passed 00:17:18.891 Test: blockdev nvme passthru rw ...[2024-07-25 11:46:15.732193] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:17:18.891 separate metadata which is not supported yet. 00:17:18.891 passed 00:17:18.891 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:46:15.732751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:17:18.891 [2024-07-25 11:46:15.732810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:17:18.891 passed 00:17:18.891 Test: blockdev nvme admin passthru ...passed 00:17:18.891 Test: blockdev copy ...passed 00:17:18.891 00:17:18.891 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.891 suites 6 6 n/a 0 0 00:17:18.891 tests 138 138 138 0 0 00:17:18.891 asserts 893 893 893 0 n/a 00:17:18.891 00:17:18.891 Elapsed time = 1.471 seconds 00:17:18.891 0 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65272 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65272 ']' 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65272 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65272 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65272' 00:17:18.891 killing process with pid 65272 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65272 00:17:18.891 11:46:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65272 00:17:19.825 11:46:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:19.825 00:17:19.825 real 0m2.805s 00:17:19.825 user 0m6.904s 00:17:19.825 sys 0m0.377s 00:17:19.825 11:46:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.825 11:46:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:19.825 ************************************ 00:17:19.825 END 
TEST bdev_bounds 00:17:19.825 ************************************ 00:17:19.826 11:46:16 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:19.826 11:46:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:19.826 11:46:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.826 11:46:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.826 ************************************ 00:17:19.826 START TEST bdev_nbd 00:17:19.826 ************************************ 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65326 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65326 /var/tmp/spdk-nbd.sock 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65326 ']' 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.826 
11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:19.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.826 11:46:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:20.083 [2024-07-25 11:46:16.899547] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:20.083 [2024-07-25 11:46:16.899960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.083 [2024-07-25 11:46:17.069672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.341 [2024-07-25 11:46:17.330023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:21.279 11:46:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:21.279 11:46:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:21.279 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.537 1+0 records in 00:17:21.537 1+0 records out 00:17:21.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715141 s, 5.7 MB/s 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:21.537 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.795 1+0 records in 00:17:21.795 1+0 records out 00:17:21.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631118 s, 6.5 MB/s 00:17:21.795 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.796 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:21.796 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.796 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:21.796 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:21.796 11:46:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:21.796 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:21.796 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.054 1+0 records in 00:17:22.054 1+0 records out 00:17:22.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063336 s, 6.5 MB/s 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:22.054 11:46:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i 
= 1 )) 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.312 1+0 records in 00:17:22.312 1+0 records out 00:17:22.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106354 s, 3.9 MB/s 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:22.312 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.571 1+0 records in 00:17:22.571 1+0 records out 00:17:22.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761165 s, 5.4 MB/s 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:22.571 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.829 1+0 records in 00:17:22.829 1+0 records out 00:17:22.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630312 s, 6.5 MB/s 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:22.829 11:46:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:23.087 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd0", 00:17:23.087 "bdev_name": "Nvme0n1" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd1", 00:17:23.087 "bdev_name": "Nvme1n1" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd2", 00:17:23.087 "bdev_name": "Nvme2n1" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd3", 00:17:23.087 "bdev_name": "Nvme2n2" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd4", 00:17:23.087 "bdev_name": "Nvme2n3" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd5", 00:17:23.087 "bdev_name": "Nvme3n1" 00:17:23.087 } 00:17:23.087 ]' 00:17:23.087 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:23.087 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd0", 00:17:23.087 "bdev_name": "Nvme0n1" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd1", 00:17:23.087 "bdev_name": "Nvme1n1" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 
"nbd_device": "/dev/nbd2", 00:17:23.087 "bdev_name": "Nvme2n1" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd3", 00:17:23.087 "bdev_name": "Nvme2n2" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd4", 00:17:23.087 "bdev_name": "Nvme2n3" 00:17:23.087 }, 00:17:23.087 { 00:17:23.087 "nbd_device": "/dev/nbd5", 00:17:23.087 "bdev_name": "Nvme3n1" 00:17:23.087 } 00:17:23.087 ]' 00:17:23.087 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.345 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.603 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.861 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:24.119 11:46:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.119 11:46:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.377 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.634 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
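The trace blocks above all exercise the same two helpers, once per device. Condensed, the polling idiom looks roughly like this; a sketch reconstructed from the xtrace (the real functions live in common/autotest_common.sh and bdev/nbd_common.sh; the retry sleep is an assumption, since every probe in this run succeeded on its first pass):

waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest  # suite uses test/bdev/nbdtest; path shortened
    for ((i = 1; i <= 20; i++)); do
        # The device exists once the kernel lists it in /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed back-off; not visible in this trace
    done
    # Prove the device is actually readable: one direct 4 KiB read
    # must produce a non-empty file.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # Inverse condition: wait until the kernel drops the partition entry.
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1  # assumed back-off, as above
    done
}

Each nbd_start_disk RPC in the trace is followed by waitfornbd, and each nbd_stop_disk by waitfornbd_exit, before the test moves on to the next device.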
00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:24.892 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:25.150 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:25.151 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:25.151 11:46:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:25.151 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:17:25.409 /dev/nbd0 00:17:25.409 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.410 1+0 records in 00:17:25.410 1+0 records out 00:17:25.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635646 s, 6.4 MB/s 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:25.410 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:17:25.668 /dev/nbd1 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.668 1+0 records in 00:17:25.668 1+0 records out 
00:17:25.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562131 s, 7.3 MB/s 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:25.668 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:17:25.927 /dev/nbd10 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.927 1+0 records in 00:17:25.927 1+0 records out 00:17:25.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529606 s, 7.7 MB/s 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:25.927 11:46:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:17:26.186 /dev/nbd11 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:17:26.186 11:46:23 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.186 1+0 records in 00:17:26.186 1+0 records out 00:17:26.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715329 s, 5.7 MB/s 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:26.186 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:17:26.760 /dev/nbd12 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.760 1+0 records in 00:17:26.760 1+0 records out 00:17:26.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00197916 s, 2.1 MB/s 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:26.760 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:17:27.018 /dev/nbd13 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.018 1+0 records in 00:17:27.018 1+0 records out 00:17:27.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634117 s, 6.5 MB/s 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.018 11:46:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd0", 00:17:27.277 "bdev_name": "Nvme0n1" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd1", 00:17:27.277 "bdev_name": "Nvme1n1" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd10", 00:17:27.277 "bdev_name": "Nvme2n1" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd11", 00:17:27.277 "bdev_name": "Nvme2n2" 00:17:27.277 }, 
00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd12", 00:17:27.277 "bdev_name": "Nvme2n3" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd13", 00:17:27.277 "bdev_name": "Nvme3n1" 00:17:27.277 } 00:17:27.277 ]' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd0", 00:17:27.277 "bdev_name": "Nvme0n1" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd1", 00:17:27.277 "bdev_name": "Nvme1n1" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd10", 00:17:27.277 "bdev_name": "Nvme2n1" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd11", 00:17:27.277 "bdev_name": "Nvme2n2" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd12", 00:17:27.277 "bdev_name": "Nvme2n3" 00:17:27.277 }, 00:17:27.277 { 00:17:27.277 "nbd_device": "/dev/nbd13", 00:17:27.277 "bdev_name": "Nvme3n1" 00:17:27.277 } 00:17:27.277 ]' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:27.277 /dev/nbd1 00:17:27.277 /dev/nbd10 00:17:27.277 /dev/nbd11 00:17:27.277 /dev/nbd12 00:17:27.277 /dev/nbd13' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:27.277 /dev/nbd1 00:17:27.277 /dev/nbd10 00:17:27.277 /dev/nbd11 00:17:27.277 /dev/nbd12 00:17:27.277 /dev/nbd13' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:27.277 256+0 records in 00:17:27.277 256+0 records out 00:17:27.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565674 s, 185 MB/s 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:27.277 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:27.536 256+0 records in 00:17:27.536 256+0 records out 00:17:27.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140486 s, 7.5 MB/s 00:17:27.536 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:27.536 11:46:24 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:27.536 256+0 records in 00:17:27.536 256+0 records out 00:17:27.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151773 s, 6.9 MB/s 00:17:27.536 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:27.536 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:27.793 256+0 records in 00:17:27.793 256+0 records out 00:17:27.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156351 s, 6.7 MB/s 00:17:27.793 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:27.793 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:27.793 256+0 records in 00:17:27.793 256+0 records out 00:17:27.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135172 s, 7.8 MB/s 00:17:27.793 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:27.793 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:28.051 256+0 records in 00:17:28.051 256+0 records out 00:17:28.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15783 s, 6.6 MB/s 00:17:28.051 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:28.051 11:46:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:28.309 256+0 records in 00:17:28.309 256+0 records out 00:17:28.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155902 s, 6.7 MB/s 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:28.310 11:46:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.310 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.577 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:28.852 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.853 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.111 11:46:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.369 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.628 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:29.886 
11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.886 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:30.145 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:30.145 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:30.145 11:46:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:30.145 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:17:30.146 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:30.404 malloc_lvol_verify 00:17:30.405 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:30.663 29cdd0fc-d546-4bc3-98ad-2c339da1b529 00:17:30.663 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:30.921 38ec1430-2107-4c67-820c-80837a27435d 00:17:30.921 11:46:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:31.181 /dev/nbd0 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:17:31.181 mke2fs 1.46.5 
(30-Dec-2021) 00:17:31.181 Discarding device blocks: 0/4096 done 00:17:31.181 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:31.181 00:17:31.181 Allocating group tables: 0/1 done 00:17:31.181 Writing inode tables: 0/1 done 00:17:31.181 Creating journal (1024 blocks): done 00:17:31.181 Writing superblocks and filesystem accounting information: 0/1 done 00:17:31.181 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.181 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:31.439 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65326 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65326 ']' 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65326 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65326 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.440 killing process with pid 65326 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65326' 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65326 00:17:31.440 11:46:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 65326 00:17:32.815 11:46:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:32.815 00:17:32.815 real 0m12.742s 00:17:32.815 user 0m18.284s 00:17:32.815 sys 0m3.831s 00:17:32.815 
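Stripped of the xtrace noise, the bdev_nbd body that just completed does three things: byte-exact data verification through every exported device, an empty-listing check after teardown, and a filesystem smoke test on a logical volume. A standalone sketch of the same flow, with the rpc.py calls taken verbatim from the trace and the loops simplified:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp=/tmp/nbdrandtest  # suite keeps this under test/bdev/; path shortened

# Write one shared 1 MiB random pattern through each device, then compare it back.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"

# Filesystem smoke test: malloc bdev -> lvstore -> lvol -> nbd export -> mkfs.
rpc bdev_malloc_create -b malloc_lvol_verify 16 512
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
rpc bdev_lvol_create lvol 4 -l lvs
rpc nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
rpc nbd_stop_disk /dev/nbd0

Counting devices with rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd is how the trace asserts that six devices existed while running and zero remained after teardown.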
************************************ 00:17:32.815 END TEST bdev_nbd 00:17:32.815 ************************************ 00:17:32.815 11:46:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.815 11:46:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:32.815 11:46:29 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:32.815 11:46:29 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:17:32.815 skipping fio tests on NVMe due to multi-ns failures. 00:17:32.815 11:46:29 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:17:32.815 11:46:29 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:32.815 11:46:29 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:32.815 11:46:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:32.815 11:46:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.815 11:46:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.815 ************************************ 00:17:32.815 START TEST bdev_verify 00:17:32.815 ************************************ 00:17:32.815 11:46:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:32.815 [2024-07-25 11:46:29.685246] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:32.815 [2024-07-25 11:46:29.685431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65734 ] 00:17:33.073 [2024-07-25 11:46:29.872553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:33.073 [2024-07-25 11:46:30.059497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.073 [2024-07-25 11:46:30.059512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.039 Running I/O for 5 seconds... 
00:17:39.299 00:17:39.299 Latency(us) 00:17:39.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.299 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x0 length 0xbd0bd 00:17:39.299 Nvme0n1 : 5.04 1625.89 6.35 0.00 0.00 78460.68 15966.95 74353.57 00:17:39.299 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:39.299 Nvme0n1 : 5.04 1625.18 6.35 0.00 0.00 78426.18 16086.11 70540.57 00:17:39.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x0 length 0xa0000 00:17:39.299 Nvme1n1 : 5.04 1625.36 6.35 0.00 0.00 78387.84 19065.02 70540.57 00:17:39.299 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0xa0000 length 0xa0000 00:17:39.299 Nvme1n1 : 5.07 1627.16 6.36 0.00 0.00 78101.56 7923.90 68157.44 00:17:39.299 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x0 length 0x80000 00:17:39.299 Nvme2n1 : 5.04 1624.51 6.35 0.00 0.00 78302.03 17754.30 68157.44 00:17:39.299 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x80000 length 0x80000 00:17:39.299 Nvme2n1 : 5.09 1634.35 6.38 0.00 0.00 77814.15 13524.25 65774.31 00:17:39.299 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x0 length 0x80000 00:17:39.299 Nvme2n2 : 5.06 1630.62 6.37 0.00 0.00 77922.52 4825.83 69110.69 00:17:39.299 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x80000 length 0x80000 00:17:39.299 Nvme2n2 : 5.09 1633.79 6.38 0.00 0.00 77704.13 13702.98 65297.69 00:17:39.299 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x0 length 0x80000 00:17:39.299 Nvme2n3 : 5.08 1639.07 6.40 0.00 0.00 77535.03 9234.62 70540.57 00:17:39.299 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x80000 length 0x80000 00:17:39.299 Nvme2n3 : 5.09 1633.23 6.38 0.00 0.00 77602.33 14000.87 68634.07 00:17:39.299 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x0 length 0x20000 00:17:39.299 Nvme3n1 : 5.08 1638.54 6.40 0.00 0.00 77433.33 8281.37 73400.32 00:17:39.299 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.299 Verification LBA range: start 0x20000 length 0x20000 00:17:39.299 Nvme3n1 : 5.10 1632.66 6.38 0.00 0.00 77532.45 10426.18 70540.57 00:17:39.299 =================================================================================================================== 00:17:39.299 Total : 19570.37 76.45 0.00 0.00 77932.72 4825.83 74353.57 00:17:40.234 00:17:40.234 real 0m7.567s 00:17:40.234 user 0m13.794s 00:17:40.234 sys 0m0.241s 00:17:40.234 11:46:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.234 ************************************ 00:17:40.234 END TEST bdev_verify 00:17:40.234 ************************************ 00:17:40.234 11:46:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:40.234 11:46:37 blockdev_nvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:40.234 11:46:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:40.234 11:46:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.234 11:46:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.234 ************************************ 00:17:40.234 START TEST bdev_verify_big_io 00:17:40.234 ************************************ 00:17:40.234 11:46:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:40.506 [2024-07-25 11:46:37.291269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:40.506 [2024-07-25 11:46:37.291420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65837 ] 00:17:40.506 [2024-07-25 11:46:37.457297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:40.775 [2024-07-25 11:46:37.686333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.775 [2024-07-25 11:46:37.686345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.707 Running I/O for 5 seconds... 00:17:48.264 00:17:48.264 Latency(us) 00:17:48.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.264 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x0 length 0xbd0b 00:17:48.264 Nvme0n1 : 5.72 117.58 7.35 0.00 0.00 1045654.72 27048.49 1029510.98 00:17:48.264 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:48.264 Nvme0n1 : 5.75 122.51 7.66 0.00 0.00 1009328.03 20971.52 1037136.99 00:17:48.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x0 length 0xa000 00:17:48.264 Nvme1n1 : 5.77 121.94 7.62 0.00 0.00 988754.43 54096.99 880803.84 00:17:48.264 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0xa000 length 0xa000 00:17:48.264 Nvme1n1 : 5.75 122.45 7.65 0.00 0.00 973040.68 91988.71 857925.82 00:17:48.264 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x0 length 0x8000 00:17:48.264 Nvme2n1 : 5.78 121.88 7.62 0.00 0.00 957496.70 56003.49 915120.87 00:17:48.264 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x8000 length 0x8000 00:17:48.264 Nvme2n1 : 5.85 126.92 7.93 0.00 0.00 916618.52 55765.18 819795.78 00:17:48.264 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x0 length 0x8000 00:17:48.264 Nvme2n2 : 5.86 126.88 7.93 0.00 0.00 892262.58 32410.53 945624.90 00:17:48.264 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x8000 length 0x8000 00:17:48.264 Nvme2n2 : 5.85 131.31 8.21 0.00 0.00 
865824.12 37415.10 850299.81 00:17:48.264 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x0 length 0x8000 00:17:48.264 Nvme2n3 : 5.86 131.09 8.19 0.00 0.00 842343.95 44564.48 983754.94 00:17:48.264 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x8000 length 0x8000 00:17:48.264 Nvme2n3 : 5.89 134.25 8.39 0.00 0.00 820243.94 40274.85 880803.84 00:17:48.264 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x0 length 0x2000 00:17:48.264 Nvme3n1 : 5.92 147.20 9.20 0.00 0.00 731185.98 6911.07 1014258.97 00:17:48.264 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.264 Verification LBA range: start 0x2000 length 0x2000 00:17:48.264 Nvme3n1 : 5.94 149.60 9.35 0.00 0.00 718411.18 2323.55 1151527.10 00:17:48.264 =================================================================================================================== 00:17:48.264 Total : 1553.59 97.10 0.00 0.00 888288.83 2323.55 1151527.10 00:17:49.200 00:17:49.200 real 0m9.018s 00:17:49.200 user 0m16.638s 00:17:49.200 sys 0m0.284s 00:17:49.200 11:46:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.200 11:46:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.200 ************************************ 00:17:49.200 END TEST bdev_verify_big_io 00:17:49.200 ************************************ 00:17:49.459 11:46:46 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.459 11:46:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:49.459 11:46:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:49.459 11:46:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:49.459 ************************************ 00:17:49.459 START TEST bdev_write_zeroes 00:17:49.459 ************************************ 00:17:49.459 11:46:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.459 [2024-07-25 11:46:46.378850] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:49.459 [2024-07-25 11:46:46.379031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65952 ] 00:17:49.717 [2024-07-25 11:46:46.552081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.975 [2024-07-25 11:46:46.770942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.541 Running I/O for 1 seconds... 
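For reference, the big-IO verify pass above reduces to a single bdevperf invocation; a minimal sketch, with flag glosses added (the command itself is taken from the log, the meaning of -C is an inference from the per-core job rows in the table above):

    # -q 128: queue depth; -o 65536: I/O size in bytes; -w verify:
    # read-back verification workload; -t 5: run time in seconds;
    # -m 0x3: run on cores 0 and 1; -C appears to let every core drive
    # each bdev, which is why each Nvme device shows two job rows
    # (Core Mask 0x1 and 0x2) in the latency table above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The write_zeroes pass that follows swaps in -o 4096 -w write_zeroes -t 1 against the same config.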
00:17:51.474 00:17:51.474 Latency(us) 00:17:51.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.475 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:51.475 Nvme0n1 : 1.02 8228.73 32.14 0.00 0.00 15502.57 11200.70 38368.35 00:17:51.475 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:51.475 Nvme1n1 : 1.02 8216.26 32.09 0.00 0.00 15503.16 11975.21 40513.16 00:17:51.475 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:51.475 Nvme2n1 : 1.02 8203.94 32.05 0.00 0.00 15469.91 11856.06 40989.79 00:17:51.475 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:51.475 Nvme2n2 : 1.02 8191.60 32.00 0.00 0.00 15412.39 11736.90 39321.60 00:17:51.475 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:51.475 Nvme2n3 : 1.03 8233.73 32.16 0.00 0.00 15334.57 8996.31 38844.97 00:17:51.475 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:51.475 Nvme3n1 : 1.03 8221.32 32.11 0.00 0.00 15300.58 9294.20 38606.66 00:17:51.475 =================================================================================================================== 00:17:51.475 Total : 49295.59 192.56 0.00 0.00 15420.27 8996.31 40989.79 00:17:52.849 00:17:52.849 real 0m3.284s 00:17:52.849 user 0m2.924s 00:17:52.849 sys 0m0.229s 00:17:52.849 11:46:49 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.849 11:46:49 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:52.849 ************************************ 00:17:52.849 END TEST bdev_write_zeroes 00:17:52.849 ************************************ 00:17:52.849 11:46:49 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:52.849 11:46:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:52.849 11:46:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.849 11:46:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:52.849 ************************************ 00:17:52.849 START TEST bdev_json_nonenclosed 00:17:52.849 ************************************ 00:17:52.849 11:46:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:52.849 [2024-07-25 11:46:49.717280] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:52.849 [2024-07-25 11:46:49.717494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66010 ] 00:17:53.108 [2024-07-25 11:46:49.892132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.108 [2024-07-25 11:46:50.078103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.108 [2024-07-25 11:46:50.078205] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:17:53.108 [2024-07-25 11:46:50.078237] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:53.108 [2024-07-25 11:46:50.078254] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:53.675 00:17:53.675 real 0m0.876s 00:17:53.675 user 0m0.654s 00:17:53.675 sys 0m0.115s 00:17:53.675 11:46:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.675 ************************************ 00:17:53.675 END TEST bdev_json_nonenclosed 00:17:53.675 11:46:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:53.675 ************************************ 00:17:53.675 11:46:50 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:53.675 11:46:50 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:53.675 11:46:50 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.675 11:46:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.675 ************************************ 00:17:53.675 START TEST bdev_json_nonarray 00:17:53.675 ************************************ 00:17:53.675 11:46:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:53.675 [2024-07-25 11:46:50.640630] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:53.675 [2024-07-25 11:46:50.641385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66036 ] 00:17:53.934 [2024-07-25 11:46:50.809990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.192 [2024-07-25 11:46:50.997106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.192 [2024-07-25 11:46:50.997217] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:54.192 [2024-07-25 11:46:50.997261] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:54.192 [2024-07-25 11:46:50.997277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:54.451 00:17:54.451 real 0m0.872s 00:17:54.451 user 0m0.629s 00:17:54.451 sys 0m0.135s 00:17:54.451 11:46:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.451 11:46:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:54.451 ************************************ 00:17:54.451 END TEST bdev_json_nonarray 00:17:54.451 ************************************ 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:17:54.451 11:46:51 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:17:54.451 00:17:54.451 real 0m43.714s 00:17:54.451 user 1m5.960s 00:17:54.451 sys 0m6.220s 00:17:54.451 11:46:51 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.451 11:46:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.451 ************************************ 00:17:54.451 END TEST blockdev_nvme 00:17:54.451 ************************************ 00:17:54.709 11:46:51 -- spdk/autotest.sh@217 -- # uname -s 00:17:54.709 11:46:51 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:17:54.709 11:46:51 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:17:54.709 11:46:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:54.709 11:46:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:54.709 11:46:51 -- common/autotest_common.sh@10 -- # set +x 00:17:54.709 ************************************ 00:17:54.709 START TEST blockdev_nvme_gpt 00:17:54.709 ************************************ 00:17:54.709 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:17:54.709 * Looking for test storage... 
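The two negative tests above probe SPDK's --json config validation: the file must be a single enclosing JSON object whose "subsystems" member is an array. A sketch of the shapes involved; the exact contents of nonenclosed.json and nonarray.json are an assumption here, inferred from the error messages in the log:

    # Valid shape accepted by --json:
    cat <<'EOF' > /tmp/valid.json
    { "subsystems": [] }
    EOF
    # nonenclosed.json presumably drops the outer braces, e.g. just
    #   "subsystems": []
    # tripping: Invalid JSON configuration: not enclosed in {}.
    # nonarray.json presumably makes "subsystems" a non-array, e.g.
    #   { "subsystems": {} }
    # tripping: Invalid JSON configuration: 'subsystems' should be an array.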
00:17:54.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:54.709 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66112 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:54.710 11:46:51 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66112 00:17:54.710 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 66112 ']' 00:17:54.710 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.710 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.710 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:54.710 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.710 11:46:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:17:54.710 [2024-07-25 11:46:51.715254] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:54.710 [2024-07-25 11:46:51.715403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66112 ] 00:17:54.968 [2024-07-25 11:46:51.879536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.227 [2024-07-25 11:46:52.066828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.832 11:46:52 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.832 11:46:52 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:17:55.832 11:46:52 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:55.832 11:46:52 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:17:55.832 11:46:52 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:56.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:56.348 Waiting for block devices as requested 00:17:56.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:56.606 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:56.606 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:56.606 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:01.873 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:01.873 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:18:01.873 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:18:01.874 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:18:01.874 BYT; 00:18:01.874 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:18:01.874 BYT; 00:18:01.874 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:01.874 11:46:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:18:01.874 11:46:58 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:18:02.808 The operation has completed successfully. 
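Condensed, the partitioning step just performed labels the scratch namespace and stamps SPDK's GPT partition-type GUIDs; every value below is verbatim from the commands above:

    # GPT-label /dev/nvme0n1 and split it into two test partitions.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Partition 1: current SPDK GPT type GUID plus a fixed unique GUID;
    # partition 2 gets the old type GUID in the next sgdisk call below.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1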
00:18:02.808 11:46:59 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:18:04.183 The operation has completed successfully. 00:18:04.183 11:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:04.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:05.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:05.019 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:05.019 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:05.019 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:05.019 11:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:18:05.019 11:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.019 11:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.019 [] 00:18:05.019 11:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.019 11:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:18:05.019 11:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:18:05.019 11:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:05.019 11:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:05.019 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:18:05.019 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.019 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.586 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.586 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:18:05.586 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.586 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.586 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.587 
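The config loaded through load_subsystem_config above is gen_nvme.sh output attaching the four PCIe controllers; restated as a heredoc for readability, with content verbatim from the RPC call in the log:

    cat <<'EOF'
    { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
    ] }
    EOF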
11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "56646437-311a-4300-a4ba-8c8adc72b6e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "56646437-311a-4300-a4ba-8c8adc72b6e3",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2107330c-f107-481f-a675-496dad2b7380"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2107330c-f107-481f-a675-496dad2b7380",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7ae044f5-2ba0-4ed6-afe5-3c02755bb765"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ae044f5-2ba0-4ed6-afe5-3c02755bb765",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "426ccd72-2831-4f52-a847-1e18cfb4e750"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "426ccd72-2831-4f52-a847-1e18cfb4e750",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f68428b6-2419-4ca2-b0b2-b65a38068d2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f68428b6-2419-4ca2-b0b2-b65a38068d2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:05.587 11:47:02 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 66112 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 66112 ']' 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 66112 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:18:05.587 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.588 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66112 00:18:05.588 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.588 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.588 killing process with pid 66112 00:18:05.588 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66112' 00:18:05.588 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 66112 00:18:05.588 11:47:02 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 66112 00:18:08.121 11:47:04 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:08.121 11:47:04 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:08.121 11:47:04 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:08.121 11:47:04 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:08.121 11:47:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:08.121 ************************************ 00:18:08.121 START TEST bdev_hello_world 00:18:08.121 ************************************ 00:18:08.121 11:47:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:08.121 [2024-07-25 11:47:04.773976] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:08.121 [2024-07-25 11:47:04.774130] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66743 ] 00:18:08.121 [2024-07-25 11:47:04.933630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.121 [2024-07-25 11:47:05.120307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.055 [2024-07-25 11:47:05.726989] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:09.055 [2024-07-25 11:47:05.727055] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:09.055 [2024-07-25 11:47:05.727087] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:09.055 [2024-07-25 11:47:05.730067] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:09.055 [2024-07-25 11:47:05.730565] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:09.055 [2024-07-25 11:47:05.730608] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:09.055 [2024-07-25 11:47:05.730914] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:18:09.055 00:18:09.055 [2024-07-25 11:47:05.730962] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:10.023 00:18:10.023 real 0m2.176s 00:18:10.023 user 0m1.857s 00:18:10.023 sys 0m0.208s 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:10.023 ************************************ 00:18:10.023 END TEST bdev_hello_world 00:18:10.023 ************************************ 00:18:10.023 11:47:06 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:10.023 11:47:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:10.023 11:47:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.023 11:47:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:10.023 ************************************ 00:18:10.023 START TEST bdev_bounds 00:18:10.023 ************************************ 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66785 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:10.023 Process bdevio pid: 66785 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66785' 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66785 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 66785 ']' 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:10.023 Waiting for process to start 
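The hello-world pass above is the stock hello_bdev example pointed at the shared config and the Nvme0n1 bdev; run in isolation it is just:

    # Open Nvme0n1, write "Hello World!", read it back, and stop the
    # app, exactly as the NOTICE lines above trace.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1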
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.023 11:47:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:10.023 [2024-07-25 11:47:06.975190] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:10.023 [2024-07-25 11:47:06.975333] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66785 ] 00:18:10.281 [2024-07-25 11:47:07.137515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:10.539 [2024-07-25 11:47:07.325834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.539 [2024-07-25 11:47:07.325961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.539 [2024-07-25 11:47:07.325976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.107 11:47:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.107 11:47:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:11.107 11:47:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:11.107 I/O targets: 00:18:11.107 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:11.107 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:18:11.107 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:18:11.107 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:11.107 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:11.107 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:11.107 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:11.107 00:18:11.107 00:18:11.107 CUnit - A unit testing framework for C - Version 2.1-3 00:18:11.107 http://cunit.sourceforge.net/ 00:18:11.107 00:18:11.107 00:18:11.107 Suite: bdevio tests on: Nvme3n1 00:18:11.107 Test: blockdev write read block ...passed 00:18:11.107 Test: blockdev write zeroes read block ...passed 00:18:11.107 Test: blockdev write zeroes read no split ...passed 00:18:11.107 Test: blockdev write zeroes read split ...passed 00:18:11.107 Test: blockdev write zeroes read split partial ...passed 00:18:11.107 Test: blockdev reset ...[2024-07-25 11:47:08.127718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:18:11.107 [2024-07-25 11:47:08.131516] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:11.107 passed 00:18:11.107 Test: blockdev write read 8 blocks ...passed 00:18:11.107 Test: blockdev write read size > 128k ...passed 00:18:11.107 Test: blockdev write read invalid size ...passed 00:18:11.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.107 Test: blockdev write read max offset ...passed 00:18:11.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.107 Test: blockdev writev readv 8 blocks ...passed 00:18:11.107 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.107 Test: blockdev writev readv block ...passed 00:18:11.107 Test: blockdev writev readv size > 128k ...passed 00:18:11.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.107 Test: blockdev comparev and writev ...[2024-07-25 11:47:08.139428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fc06000 len:0x1000 00:18:11.107 [2024-07-25 11:47:08.139489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:11.107 passed 00:18:11.107 Test: blockdev nvme passthru rw ...passed 00:18:11.107 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:47:08.140345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:11.107 [2024-07-25 11:47:08.140383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:11.107 passed 00:18:11.367 Test: blockdev nvme admin passthru ...passed 00:18:11.367 Test: blockdev copy ...passed 00:18:11.367 Suite: bdevio tests on: Nvme2n3 00:18:11.367 Test: blockdev write read block ...passed 00:18:11.367 Test: blockdev write zeroes read block ...passed 00:18:11.367 Test: blockdev write zeroes read no split ...passed 00:18:11.367 Test: blockdev write zeroes read split ...passed 00:18:11.367 Test: blockdev write zeroes read split partial ...passed 00:18:11.367 Test: blockdev reset ...[2024-07-25 11:47:08.206309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:11.367 [2024-07-25 11:47:08.210534] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:11.367 passed 00:18:11.367 Test: blockdev write read 8 blocks ...passed 00:18:11.367 Test: blockdev write read size > 128k ...passed 00:18:11.367 Test: blockdev write read invalid size ...passed 00:18:11.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.367 Test: blockdev write read max offset ...passed 00:18:11.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.367 Test: blockdev writev readv 8 blocks ...passed 00:18:11.367 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.367 Test: blockdev writev readv block ...passed 00:18:11.367 Test: blockdev writev readv size > 128k ...passed 00:18:11.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.367 Test: blockdev comparev and writev ...[2024-07-25 11:47:08.217862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28383c000 len:0x1000 00:18:11.367 [2024-07-25 11:47:08.217921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:11.367 passed 00:18:11.367 Test: blockdev nvme passthru rw ...passed 00:18:11.367 Test: blockdev nvme passthru vendor specific ...passed 00:18:11.367 Test: blockdev nvme admin passthru ...[2024-07-25 11:47:08.218717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:11.367 [2024-07-25 11:47:08.218753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:11.367 passed 00:18:11.367 Test: blockdev copy ...passed 00:18:11.367 Suite: bdevio tests on: Nvme2n2 00:18:11.367 Test: blockdev write read block ...passed 00:18:11.367 Test: blockdev write zeroes read block ...passed 00:18:11.367 Test: blockdev write zeroes read no split ...passed 00:18:11.367 Test: blockdev write zeroes read split ...passed 00:18:11.367 Test: blockdev write zeroes read split partial ...passed 00:18:11.367 Test: blockdev reset ...[2024-07-25 11:47:08.283216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:11.367 [2024-07-25 11:47:08.287425] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:11.367 passed 00:18:11.367 Test: blockdev write read 8 blocks ...passed 00:18:11.367 Test: blockdev write read size > 128k ...passed 00:18:11.367 Test: blockdev write read invalid size ...passed 00:18:11.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.367 Test: blockdev write read max offset ...passed 00:18:11.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.367 Test: blockdev writev readv 8 blocks ...passed 00:18:11.367 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.367 Test: blockdev writev readv block ...passed 00:18:11.367 Test: blockdev writev readv size > 128k ...passed 00:18:11.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.367 Test: blockdev comparev and writev ...[2024-07-25 11:47:08.294943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283836000 len:0x1000 00:18:11.367 [2024-07-25 11:47:08.295007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:11.367 passed 00:18:11.367 Test: blockdev nvme passthru rw ...passed 00:18:11.367 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:47:08.295957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:11.367 [2024-07-25 11:47:08.295992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:11.367 passed 00:18:11.367 Test: blockdev nvme admin passthru ...passed 00:18:11.367 Test: blockdev copy ...passed 00:18:11.367 Suite: bdevio tests on: Nvme2n1 00:18:11.367 Test: blockdev write read block ...passed 00:18:11.367 Test: blockdev write zeroes read block ...passed 00:18:11.367 Test: blockdev write zeroes read no split ...passed 00:18:11.367 Test: blockdev write zeroes read split ...passed 00:18:11.367 Test: blockdev write zeroes read split partial ...passed 00:18:11.367 Test: blockdev reset ...[2024-07-25 11:47:08.359970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:11.367 [2024-07-25 11:47:08.364148] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:11.367 passed 00:18:11.367 Test: blockdev write read 8 blocks ...passed 00:18:11.367 Test: blockdev write read size > 128k ...passed 00:18:11.367 Test: blockdev write read invalid size ...passed 00:18:11.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.367 Test: blockdev write read max offset ...passed 00:18:11.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.367 Test: blockdev writev readv 8 blocks ...passed 00:18:11.367 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.367 Test: blockdev writev readv block ...passed 00:18:11.367 Test: blockdev writev readv size > 128k ...passed 00:18:11.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.367 Test: blockdev comparev and writev ...[2024-07-25 11:47:08.371805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283832000 len:0x1000 00:18:11.367 [2024-07-25 11:47:08.371864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:11.367 passed 00:18:11.367 Test: blockdev nvme passthru rw ...passed 00:18:11.367 Test: blockdev nvme passthru vendor specific ...passed 00:18:11.367 Test: blockdev nvme admin passthru ...[2024-07-25 11:47:08.372731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:11.367 [2024-07-25 11:47:08.372771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:11.367 passed 00:18:11.367 Test: blockdev copy ...passed 00:18:11.367 Suite: bdevio tests on: Nvme1n1p2 00:18:11.367 Test: blockdev write read block ...passed 00:18:11.367 Test: blockdev write zeroes read block ...passed 00:18:11.367 Test: blockdev write zeroes read no split ...passed 00:18:11.626 Test: blockdev write zeroes read split ...passed 00:18:11.626 Test: blockdev write zeroes read split partial ...passed 00:18:11.626 Test: blockdev reset ...[2024-07-25 11:47:08.438826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:18:11.626 [2024-07-25 11:47:08.442589] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:11.626 passed 00:18:11.626 Test: blockdev write read 8 blocks ...passed 00:18:11.626 Test: blockdev write read size > 128k ...passed 00:18:11.626 Test: blockdev write read invalid size ...passed 00:18:11.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.626 Test: blockdev write read max offset ...passed 00:18:11.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.626 Test: blockdev writev readv 8 blocks ...passed 00:18:11.626 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.626 Test: blockdev writev readv block ...passed 00:18:11.626 Test: blockdev writev readv size > 128k ...passed 00:18:11.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.627 Test: blockdev comparev and writev ...[2024-07-25 11:47:08.450497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28382e000 len:0x1000 00:18:11.627 [2024-07-25 11:47:08.450566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:11.627 passed 00:18:11.627 Test: blockdev nvme passthru rw ...passed 00:18:11.627 Test: blockdev nvme passthru vendor specific ...passed 00:18:11.627 Test: blockdev nvme admin passthru ...passed 00:18:11.627 Test: blockdev copy ...passed 00:18:11.627 Suite: bdevio tests on: Nvme1n1p1 00:18:11.627 Test: blockdev write read block ...passed 00:18:11.627 Test: blockdev write zeroes read block ...passed 00:18:11.627 Test: blockdev write zeroes read no split ...passed 00:18:11.627 Test: blockdev write zeroes read split ...passed 00:18:11.627 Test: blockdev write zeroes read split partial ...passed 00:18:11.627 Test: blockdev reset ...[2024-07-25 11:47:08.517656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:18:11.627 [2024-07-25 11:47:08.521141] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:11.627 passed 00:18:11.627 Test: blockdev write read 8 blocks ...passed 00:18:11.627 Test: blockdev write read size > 128k ...passed 00:18:11.627 Test: blockdev write read invalid size ...passed 00:18:11.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.627 Test: blockdev write read max offset ...passed 00:18:11.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.627 Test: blockdev writev readv 8 blocks ...passed 00:18:11.627 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.627 Test: blockdev writev readv block ...passed 00:18:11.627 Test: blockdev writev readv size > 128k ...passed 00:18:11.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.627 Test: blockdev comparev and writev ...[2024-07-25 11:47:08.529184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x280c0e000 len:0x1000 00:18:11.627 [2024-07-25 11:47:08.529238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:11.627 passed 00:18:11.627 Test: blockdev nvme passthru rw ...passed 00:18:11.627 Test: blockdev nvme passthru vendor specific ...passed 00:18:11.627 Test: blockdev nvme admin passthru ...passed 00:18:11.627 Test: blockdev copy ...passed 00:18:11.627 Suite: bdevio tests on: Nvme0n1 00:18:11.627 Test: blockdev write read block ...passed 00:18:11.627 Test: blockdev write zeroes read block ...passed 00:18:11.627 Test: blockdev write zeroes read no split ...passed 00:18:11.627 Test: blockdev write zeroes read split ...passed 00:18:11.627 Test: blockdev write zeroes read split partial ...passed 00:18:11.627 Test: blockdev reset ...[2024-07-25 11:47:08.593583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:11.627 [2024-07-25 11:47:08.597121] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:11.627 passed 00:18:11.627 Test: blockdev write read 8 blocks ...passed 00:18:11.627 Test: blockdev write read size > 128k ...passed 00:18:11.627 Test: blockdev write read invalid size ...passed 00:18:11.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:11.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:11.627 Test: blockdev write read max offset ...passed 00:18:11.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.627 Test: blockdev writev readv 8 blocks ...passed 00:18:11.627 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.627 Test: blockdev writev readv block ...passed 00:18:11.627 Test: blockdev writev readv size > 128k ...passed 00:18:11.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.627 Test: blockdev comparev and writev ...passed 00:18:11.627 Test: blockdev nvme passthru rw ...[2024-07-25 11:47:08.604143] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:18:11.627 separate metadata which is not supported yet. 
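Note: Nvme0n1 is the odd one out here: its namespace is formatted with separate metadata, which the comparev path does not support yet, so bdevio logs the skip above instead of failing the suite. The run summary that follows is easy to sanity-check: each of the 7 suites (Nvme0n1, Nvme1n1p1, Nvme1n1p2, Nvme2n1, Nvme2n2, Nvme2n3, Nvme3n1) runs the same 23 blockdev tests, and 7 x 23 = 161, which matches the "tests 161 161 161 0 0" line.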
00:18:11.627 passed 00:18:11.627 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:47:08.604713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:18:11.627 [2024-07-25 11:47:08.604756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:18:11.627 passed 00:18:11.627 Test: blockdev nvme admin passthru ...passed 00:18:11.627 Test: blockdev copy ...passed 00:18:11.627 00:18:11.627 Run Summary: Type Total Ran Passed Failed Inactive 00:18:11.627 suites 7 7 n/a 0 0 00:18:11.627 tests 161 161 161 0 0 00:18:11.627 asserts 1025 1025 1025 0 n/a 00:18:11.627 00:18:11.627 Elapsed time = 1.492 seconds 00:18:11.627 0 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66785 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 66785 ']' 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 66785 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66785 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.627 killing process with pid 66785 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66785' 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 66785 00:18:11.627 11:47:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 66785 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:13.004 00:18:13.004 real 0m2.731s 00:18:13.004 user 0m6.750s 00:18:13.004 sys 0m0.336s 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:13.004 ************************************ 00:18:13.004 END TEST bdev_bounds 00:18:13.004 ************************************ 00:18:13.004 11:47:09 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:13.004 11:47:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:13.004 11:47:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:13.004 11:47:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:13.004 ************************************ 00:18:13.004 START TEST bdev_nbd 00:18:13.004 ************************************ 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66850 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66850 /var/tmp/spdk-nbd.sock 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 66850 ']' 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.004 11:47:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:13.004 [2024-07-25 11:47:09.774445] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:13.004 [2024-07-25 11:47:09.774641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.004 [2024-07-25 11:47:09.948059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.263 [2024-07-25 11:47:10.169940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:13.829 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:13.830 11:47:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.088 1+0 records in 00:18:14.088 1+0 records out 00:18:14.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444941 s, 9.2 MB/s 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:14.088 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.655 1+0 records in 00:18:14.655 1+0 records out 00:18:14.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439497 s, 9.3 MB/s 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:14.655 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.914 1+0 records in 00:18:14.914 1+0 records out 00:18:14.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508775 s, 8.1 MB/s 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:14.914 11:47:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.173 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.174 1+0 records in 00:18:15.174 1+0 records out 00:18:15.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101886 s, 4.0 MB/s 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:15.174 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:18:15.432 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.433 1+0 records in 00:18:15.433 1+0 records out 00:18:15.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809113 s, 5.1 MB/s 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:15.433 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
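Note: the xtrace above is the first half of nbd_function_test: for each bdev in the list, an NBD export is created over the RPC socket, the script polls /proc/partitions until the device node appears, and a single 4 KiB direct read via dd proves the device is serviceable. Condensed into a standalone sketch (socket path, bdev name, and dd parameters are taken from the trace; the 0.1 s poll interval is an assumption, the script's actual retry budget is 20 iterations):

$ sock=/var/tmp/spdk-nbd.sock
$ dev=$(scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1)       # RPC returns the assigned device, /dev/nbd0 here
$ until grep -q -w "${dev#/dev/}" /proc/partitions; do sleep 0.1; done
$ dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct     # expect "1+0 records in / 1+0 records out"
$ scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"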
00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:15.691 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.692 1+0 records in 00:18:15.692 1+0 records out 00:18:15.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935228 s, 4.4 MB/s 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:15.692 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.950 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.208 1+0 records in 00:18:16.208 1+0 records out 00:18:16.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00238454 s, 1.7 MB/s 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:16.208 11:47:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:16.466 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:16.466 { 00:18:16.466 "nbd_device": "/dev/nbd0", 00:18:16.466 "bdev_name": "Nvme0n1" 00:18:16.466 }, 00:18:16.466 { 00:18:16.466 "nbd_device": "/dev/nbd1", 00:18:16.466 "bdev_name": "Nvme1n1p1" 00:18:16.466 }, 00:18:16.466 { 00:18:16.466 "nbd_device": "/dev/nbd2", 00:18:16.466 "bdev_name": "Nvme1n1p2" 00:18:16.466 }, 00:18:16.466 { 00:18:16.466 "nbd_device": "/dev/nbd3", 00:18:16.466 "bdev_name": "Nvme2n1" 00:18:16.466 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd4", 00:18:16.467 "bdev_name": "Nvme2n2" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd5", 00:18:16.467 "bdev_name": "Nvme2n3" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd6", 00:18:16.467 "bdev_name": "Nvme3n1" 00:18:16.467 } 00:18:16.467 ]' 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd0", 00:18:16.467 "bdev_name": "Nvme0n1" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd1", 00:18:16.467 "bdev_name": "Nvme1n1p1" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd2", 00:18:16.467 "bdev_name": "Nvme1n1p2" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd3", 00:18:16.467 "bdev_name": "Nvme2n1" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd4", 00:18:16.467 "bdev_name": "Nvme2n2" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd5", 00:18:16.467 "bdev_name": "Nvme2n3" 00:18:16.467 }, 00:18:16.467 { 00:18:16.467 "nbd_device": "/dev/nbd6", 00:18:16.467 "bdev_name": "Nvme3n1" 00:18:16.467 } 00:18:16.467 ]' 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.467 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.725 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.984 11:47:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.241 11:47:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.498 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.755 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.011 11:47:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
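Note: teardown mirrors setup: nbd_stop_disk is issued per device and waitfornbd_exit polls /proc/partitions until the entry disappears before moving on (the nbd6 iteration continues below). Once all seven devices are gone the test re-queries the server and expects an empty list; a sketch of that final check, using the same jq filter the script applies to the nbd_get_disks output:

$ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
[]
$ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
0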
00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.269 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.527 11:47:15 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:18.527 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:18:18.785 /dev/nbd0 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.785 1+0 records in 00:18:18.785 1+0 records out 00:18:18.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664088 s, 6.2 MB/s 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:18.785 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:18:19.043 /dev/nbd1 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:19.043 11:47:15 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.043 1+0 records in 00:18:19.043 1+0 records out 00:18:19.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460543 s, 8.9 MB/s 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:19.043 11:47:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:18:19.311 /dev/nbd10 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.311 1+0 records in 00:18:19.311 1+0 records out 00:18:19.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707585 s, 5.8 MB/s 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:19.311 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:18:19.569 /dev/nbd11 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.569 1+0 records in 00:18:19.569 1+0 records out 00:18:19.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786409 s, 5.2 MB/s 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:19.569 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:18:19.827 /dev/nbd12 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
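Note: this second pass, nbd_rpc_data_verify, differs from the first in one detail: each bdev is pinned to an explicit device (nbd_start_disk Nvme1n1p2 /dev/nbd10 and so on) instead of letting the RPC pick one, so the later data-verification step knows exactly which /dev/nbdX fronts which bdev. The readiness gate is unchanged: the grep against /proc/partitions just above has matched, the retry loop breaks on the next line, and the usual 4 KiB direct read follows. The pinned form of the call, names as in the trace:

$ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12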
00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.827 1+0 records in 00:18:19.827 1+0 records out 00:18:19.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059908 s, 6.8 MB/s 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:19.827 11:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:18:20.085 /dev/nbd13 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:20.085 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.343 1+0 records in 00:18:20.343 1+0 records out 00:18:20.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763792 s, 5.4 MB/s 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:18:20.343 /dev/nbd14 00:18:20.343 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.601 1+0 records in 00:18:20.601 1+0 records out 00:18:20.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903693 s, 4.5 MB/s 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.601 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd0", 00:18:20.859 "bdev_name": "Nvme0n1" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd1", 00:18:20.859 "bdev_name": "Nvme1n1p1" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd10", 00:18:20.859 "bdev_name": "Nvme1n1p2" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd11", 00:18:20.859 "bdev_name": "Nvme2n1" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd12", 00:18:20.859 "bdev_name": "Nvme2n2" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd13", 00:18:20.859 "bdev_name": "Nvme2n3" 
00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd14", 00:18:20.859 "bdev_name": "Nvme3n1" 00:18:20.859 } 00:18:20.859 ]' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd0", 00:18:20.859 "bdev_name": "Nvme0n1" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd1", 00:18:20.859 "bdev_name": "Nvme1n1p1" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd10", 00:18:20.859 "bdev_name": "Nvme1n1p2" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd11", 00:18:20.859 "bdev_name": "Nvme2n1" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd12", 00:18:20.859 "bdev_name": "Nvme2n2" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd13", 00:18:20.859 "bdev_name": "Nvme2n3" 00:18:20.859 }, 00:18:20.859 { 00:18:20.859 "nbd_device": "/dev/nbd14", 00:18:20.859 "bdev_name": "Nvme3n1" 00:18:20.859 } 00:18:20.859 ]' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:20.859 /dev/nbd1 00:18:20.859 /dev/nbd10 00:18:20.859 /dev/nbd11 00:18:20.859 /dev/nbd12 00:18:20.859 /dev/nbd13 00:18:20.859 /dev/nbd14' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:20.859 /dev/nbd1 00:18:20.859 /dev/nbd10 00:18:20.859 /dev/nbd11 00:18:20.859 /dev/nbd12 00:18:20.859 /dev/nbd13 00:18:20.859 /dev/nbd14' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:20.859 256+0 records in 00:18:20.859 256+0 records out 00:18:20.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655182 s, 160 MB/s 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:20.859 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:21.118 256+0 records in 00:18:21.118 256+0 records out 00:18:21.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.167669 s, 6.3 MB/s 00:18:21.118 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.118 11:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:21.118 256+0 records in 00:18:21.118 256+0 records out 00:18:21.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160382 s, 6.5 MB/s 00:18:21.118 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.118 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:21.376 256+0 records in 00:18:21.376 256+0 records out 00:18:21.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170927 s, 6.1 MB/s 00:18:21.376 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.376 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:21.633 256+0 records in 00:18:21.633 256+0 records out 00:18:21.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166749 s, 6.3 MB/s 00:18:21.633 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.633 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:21.633 256+0 records in 00:18:21.633 256+0 records out 00:18:21.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170501 s, 6.1 MB/s 00:18:21.633 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.633 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:21.891 256+0 records in 00:18:21.891 256+0 records out 00:18:21.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157584 s, 6.7 MB/s 00:18:21.891 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.891 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:18:22.150 256+0 records in 00:18:22.150 256+0 records out 00:18:22.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176058 s, 6.0 MB/s 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.150 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.408 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.666 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.924 11:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.182 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.469 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.727 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:23.985 11:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:24.243 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:24.243 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:24.243 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:18:24.501 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:24.758 malloc_lvol_verify 00:18:24.758 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:25.016 6bf90769-245a-48fc-9242-5f978fba5a27 00:18:25.016 11:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:25.016 7a551ac0-8a10-4890-9a32-368a22647a56 00:18:25.016 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:25.274 /dev/nbd0 00:18:25.274 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:18:25.274 mke2fs 1.46.5 (30-Dec-2021) 00:18:25.274 Discarding device blocks: 0/4096 done 00:18:25.274 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:25.274 00:18:25.274 Allocating group tables: 0/1 done 00:18:25.274 Writing inode tables: 0/1 done 00:18:25.274 Creating journal (1024 blocks): done 00:18:25.274 Writing superblocks and filesystem accounting information: 0/1 done 00:18:25.274 00:18:25.274 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:18:25.274 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:25.274 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66850 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 66850 ']' 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 66850 00:18:25.532 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66850 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.789 killing process with pid 66850 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66850' 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 66850 00:18:25.789 11:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 66850 00:18:26.724 11:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:26.724 00:18:26.724 real 0m14.081s 00:18:26.724 user 0m19.912s 00:18:26.724 sys 0m4.479s 00:18:26.724 11:47:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.724 ************************************ 00:18:26.724 END TEST bdev_nbd 00:18:26.725 ************************************ 00:18:26.725 11:47:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:26.983 11:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:26.983 11:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:18:26.983 11:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:18:26.983 skipping fio tests on NVMe due to multi-ns failures. 00:18:26.983 11:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:18:26.983 11:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:26.983 11:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:26.983 11:47:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:26.983 11:47:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.983 11:47:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:26.983 ************************************ 00:18:26.983 START TEST bdev_verify 00:18:26.983 ************************************ 00:18:26.983 11:47:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:26.983 [2024-07-25 11:47:23.889412] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:26.983 [2024-07-25 11:47:23.889568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67294 ] 00:18:27.242 [2024-07-25 11:47:24.051480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:27.242 [2024-07-25 11:47:24.237645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.242 [2024-07-25 11:47:24.237673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.179 Running I/O for 5 seconds... 
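While that five-second pass runs, the bdevperf invocation in the run_test record above is worth decoding. The flag glosses below are summarized from bdevperf usage rather than from this log, and the -C reading is inferred from the per-core jobs in the results that follow:

    # bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    #   -q 128     queue depth: up to 128 I/Os in flight per job
    #   -o 4096    I/O size in bytes (4 KiB)
    #   -w verify  write each block, read it back, and compare
    #   -t 5       run for 5 seconds
    #   -m 0x3     core mask 0b11: reactors on cores 0 and 1, matching the
    #              two "Reactor started" notices above
    #   -C         let every core in the mask drive every bdev, which is why
    #              each device reports both a Core Mask 0x1 and 0x2 job below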
00:18:33.443
00:18:33.443 Latency(us)
00:18:33.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:33.443 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x0 length 0xbd0bd
00:18:33.443 Nvme0n1 : 5.08 1387.11 5.42 0.00 0.00 92013.24 21448.15 94371.84
00:18:33.443 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:18:33.443 Nvme0n1 : 5.09 1332.83 5.21 0.00 0.00 95806.55 19184.17 98184.84
00:18:33.443 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x0 length 0x4ff80
00:18:33.443 Nvme1n1p1 : 5.08 1386.64 5.42 0.00 0.00 91766.89 23354.65 89128.96
00:18:33.443 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x4ff80 length 0x4ff80
00:18:33.443 Nvme1n1p1 : 5.09 1331.81 5.20 0.00 0.00 95706.03 20733.21 94371.84
00:18:33.443 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x0 length 0x4ff7f
00:18:33.443 Nvme1n1p2 : 5.08 1386.14 5.41 0.00 0.00 91590.17 22997.18 83409.45
00:18:33.443 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:18:33.443 Nvme1n1p2 : 5.10 1330.61 5.20 0.00 0.00 95600.20 22878.02 90558.84
00:18:33.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x0 length 0x80000
00:18:33.443 Nvme2n1 : 5.08 1385.68 5.41 0.00 0.00 91389.69 22282.24 86269.21
00:18:33.443 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x80000 length 0x80000
00:18:33.443 Nvme2n1 : 5.10 1330.15 5.20 0.00 0.00 95421.98 21924.77 95801.72
00:18:33.443 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x0 length 0x80000
00:18:33.443 Nvme2n2 : 5.08 1385.23 5.41 0.00 0.00 91201.41 21805.61 88652.33
00:18:33.443 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.443 Verification LBA range: start 0x80000 length 0x80000
00:18:33.444 Nvme2n2 : 5.10 1329.65 5.19 0.00 0.00 95277.49 21686.46 96754.97
00:18:33.444 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.444 Verification LBA range: start 0x0 length 0x80000
00:18:33.444 Nvme2n3 : 5.09 1395.56 5.45 0.00 0.00 90400.49 3515.11 91988.71
00:18:33.444 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.444 Verification LBA range: start 0x80000 length 0x80000
00:18:33.444 Nvme2n3 : 5.10 1329.18 5.19 0.00 0.00 95112.07 20852.36 97708.22
00:18:33.444 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.444 Verification LBA range: start 0x0 length 0x20000
00:18:33.444 Nvme3n1 : 5.09 1394.61 5.45 0.00 0.00 90311.26 5332.25 94848.47
00:18:33.444 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.444 Verification LBA range: start 0x20000 length 0x20000
00:18:33.444 Nvme3n1 : 5.11 1328.72 5.19 0.00 0.00 94946.23 12868.89 99138.09
00:18:33.444 ===================================================================================================================
00:18:33.444 Total : 19033.92 74.35 0.00 0.00 93282.06 3515.11 99138.09
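The columns in that table are internally consistent: at a 4 KiB I/O size, MiB/s should be IOPS divided by 256, and it is. A spot check worked here (the arithmetic is added commentary, not log output):

    1387.11 IOPS x 4096 B = 5,681,602 B/s, / 1,048,576 ~= 5.42 MiB/s   (first Nvme0n1 row)
    19033.92 IOPS x 4096 B / 1,048,576 ~= 74.35 MiB/s                  (Total row)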
00:18:34.816 ************************************ 00:18:34.816 00:18:34.816 real 0m7.699s 00:18:34.816 user 0m14.090s 00:18:34.816 sys 0m0.247s 00:18:34.816 11:47:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.816 11:47:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:34.816 END TEST bdev_verify 00:18:34.816 ************************************ 00:18:34.816 11:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:34.816 11:47:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:34.816 11:47:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.816 11:47:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:34.816 ************************************ 00:18:34.816 START TEST bdev_verify_big_io 00:18:34.816 ************************************ 00:18:34.816 11:47:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:34.816 [2024-07-25 11:47:31.662120] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:34.816 [2024-07-25 11:47:31.662295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67398 ] 00:18:34.816 [2024-07-25 11:47:31.832395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.073 [2024-07-25 11:47:32.017916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.073 [2024-07-25 11:47:32.017929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.005 Running I/O for 5 seconds... 
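The big-I/O variant that starts here reuses the same harness with -o 65536, so each verify I/O now spans sixteen 4 KiB blocks: per-device IOPS drop by an order of magnitude in the table below while MiB/s stays in the same range. The Total row checks out the same way (arithmetic added here, not log output):

    with 64 KiB I/Os, MiB/s = IOPS / 16: 1524.96 / 16 ~= 95.31, matching the Total row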
00:18:42.589
00:18:42.589 Latency(us)
00:18:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:42.589 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.589 Verification LBA range: start 0x0 length 0xbd0b
00:18:42.589 Nvme0n1 : 5.71 104.47 6.53 0.00 0.00 1176528.34 51952.17 1258291.20
00:18:42.590 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0xbd0b length 0xbd0b
00:18:42.590 Nvme0n1 : 5.85 100.88 6.30 0.00 0.00 1223844.71 20494.89 1814989.73
00:18:42.590 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x0 length 0x4ff8
00:18:42.590 Nvme1n1p1 : 5.97 85.73 5.36 0.00 0.00 1384636.97 157286.40 2104778.01
00:18:42.590 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x4ff8 length 0x4ff8
00:18:42.590 Nvme1n1p1 : 5.85 99.64 6.23 0.00 0.00 1191754.12 34793.66 1837867.75
00:18:42.590 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x0 length 0x4ff7
00:18:42.590 Nvme1n1p2 : 5.97 83.02 5.19 0.00 0.00 1390620.11 153473.40 2150534.05
00:18:42.590 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x4ff7 length 0x4ff7
00:18:42.590 Nvme1n1p2 : 5.85 101.13 6.32 0.00 0.00 1142803.73 53382.05 1868371.78
00:18:42.590 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x0 length 0x8000
00:18:42.590 Nvme2n1 : 5.98 109.77 6.86 0.00 0.00 1036195.73 120586.24 1113397.06
00:18:42.590 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x8000 length 0x8000
00:18:42.590 Nvme2n1 : 5.98 103.18 6.45 0.00 0.00 1081484.01 71970.44 1891249.80
00:18:42.590 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x0 length 0x8000
00:18:42.590 Nvme2n2 : 5.99 117.60 7.35 0.00 0.00 956344.51 7238.75 1166779.11
00:18:42.590 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x8000 length 0x8000
00:18:42.590 Nvme2n2 : 6.05 114.82 7.18 0.00 0.00 955685.64 25022.84 1921753.83
00:18:42.590 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x0 length 0x8000
00:18:42.590 Nvme2n3 : 6.00 123.52 7.72 0.00 0.00 888869.39 3202.33 1197283.14
00:18:42.590 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x8000 length 0x8000
00:18:42.590 Nvme2n3 : 6.06 118.73 7.42 0.00 0.00 894717.46 23712.12 1952257.86
00:18:42.590 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x0 length 0x2000
00:18:42.590 Nvme3n1 : 6.00 123.81 7.74 0.00 0.00 861736.84 4527.94 1235413.18
00:18:42.590 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.590 Verification LBA range: start 0x2000 length 0x2000
00:18:42.590 Nvme3n1 : 6.10 138.66 8.67 0.00 0.00 756802.03 960.70 1982761.89
00:18:42.590 ===================================================================================================================
00:18:42.590 Total : 1524.96 95.31 0.00 0.00 1040732.74 
960.70 2150534.05 00:18:43.984 00:18:43.984 real 0m9.203s 00:18:43.984 user 0m17.017s 00:18:43.984 sys 0m0.290s 00:18:43.984 11:47:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.984 11:47:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.984 ************************************ 00:18:43.984 END TEST bdev_verify_big_io 00:18:43.984 ************************************ 00:18:43.984 11:47:40 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:43.984 11:47:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:43.984 11:47:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.984 11:47:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:43.984 ************************************ 00:18:43.984 START TEST bdev_write_zeroes 00:18:43.984 ************************************ 00:18:43.984 11:47:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:43.984 [2024-07-25 11:47:40.903711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:43.984 [2024-07-25 11:47:40.903856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67518 ] 00:18:44.242 [2024-07-25 11:47:41.066979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.242 [2024-07-25 11:47:41.251941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.175 Running I/O for 1 seconds... 
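The write_zeroes pass that follows is deliberately lighter: -w write_zeroes -t 1 with -c 0x1, so a single reactor on core 0 drives all seven bdevs and every job in the next table reports Core Mask 0x1. There is no read-back compare in this workload, which is roughly why per-device IOPS land about 5x above the 4 KiB verify pass. The Total row is just the per-device sum (arithmetic added here, not log output):

    6918.50 + 6908.30 + 6938.51 + 6929.47 + 6920.33 + 6966.03 + 6957.16
      = 48538.30 IOPS; x 4096 B / 1,048,576 ~= 189.60 MiB/s   (Total row)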
00:18:46.116
00:18:46.116 Latency(us)
00:18:46.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.116 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:46.116 Nvme0n1 : 1.02 6918.50 27.03 0.00 0.00 18424.60 12809.31 30980.65
00:18:46.116 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:46.116 Nvme1n1p1 : 1.02 6908.30 26.99 0.00 0.00 18417.71 13405.09 30742.34
00:18:46.116 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:46.116 Nvme1n1p2 : 1.02 6938.51 27.10 0.00 0.00 18285.78 10783.65 28001.75
00:18:46.116 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:46.116 Nvme2n1 : 1.03 6929.47 27.07 0.00 0.00 18241.35 11141.12 26571.87
00:18:46.116 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:46.116 Nvme2n2 : 1.03 6920.33 27.03 0.00 0.00 18217.71 11260.28 25380.31
00:18:46.116 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:46.116 Nvme2n3 : 1.03 6966.03 27.21 0.00 0.00 18060.28 6821.70 25380.31
00:18:46.116 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:47.497 [2024-07-25 11:47:44.261104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67571 ] 00:18:47.497 [2024-07-25 11:47:44.423508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.755 [2024-07-25 11:47:44.607415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.755 [2024-07-25 11:47:44.607561] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:47.755 [2024-07-25 11:47:44.607592] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:47.755 [2024-07-25 11:47:44.607609] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:48.014 00:18:48.014 real 0m0.858s 00:18:48.014 user 0m0.631s 00:18:48.014 sys 0m0.120s 00:18:48.014 11:47:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:48.014 11:47:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:48.014 ************************************ 00:18:48.014 END TEST bdev_json_nonenclosed 00:18:48.014 ************************************ 00:18:48.273 11:47:45 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.273 11:47:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:48.273 11:47:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:48.273 11:47:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:48.273 ************************************ 00:18:48.273 START TEST bdev_json_nonarray 00:18:48.273 ************************************ 00:18:48.273 11:47:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.273 [2024-07-25 11:47:45.166893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:48.273 [2024-07-25 11:47:45.167080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67601 ] 00:18:48.532 [2024-07-25 11:47:45.339290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.532 [2024-07-25 11:47:45.528468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.532 [2024-07-25 11:47:45.528594] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
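The two *ERROR* records above are the point of these runs, not a fault: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf deliberately malformed configs and pass only if the app rejects them and shuts down cleanly, which the spdk_app_stop warnings that follow confirm. The offending shapes, reconstructed here from the error text since the actual nonenclosed.json and nonarray.json contents are not in this log:

    # nonenclosed.json: top-level members without the enclosing {}
    #     "subsystems": [ ... ]
    # nonarray.json: enclosed, but 'subsystems' is an object where an
    # array is required
    #     { "subsystems": { ... } }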
00:18:48.532 [2024-07-25 11:47:45.528625] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:48.532 [2024-07-25 11:47:45.528643] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:49.099 00:18:49.099 real 0m0.880s 00:18:49.099 user 0m0.635s 00:18:49.099 sys 0m0.139s 00:18:49.099 11:47:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.099 11:47:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:49.099 ************************************ 00:18:49.099 END TEST bdev_json_nonarray 00:18:49.099 ************************************ 00:18:49.099 11:47:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:18:49.099 11:47:45 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:18:49.099 11:47:45 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:18:49.099 11:47:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:49.099 11:47:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.100 11:47:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:49.100 ************************************ 00:18:49.100 START TEST bdev_gpt_uuid 00:18:49.100 ************************************ 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67628 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67628 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:49.100 11:47:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67628 ']' 00:18:49.100 11:47:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.100 11:47:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.100 11:47:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.100 11:47:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.100 11:47:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:49.100 [2024-07-25 11:47:46.118993] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:49.100 [2024-07-25 11:47:46.119156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67628 ] 00:18:49.358 [2024-07-25 11:47:46.289566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.616 [2024-07-25 11:47:46.476983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.183 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.183 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:18:50.183 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:50.183 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.183 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.750 Some configs were skipped because the RPC state that can call them passed over. 00:18:50.750 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:18:50.751 { 00:18:50.751 "name": "Nvme1n1p1", 00:18:50.751 "aliases": [ 00:18:50.751 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:18:50.751 ], 00:18:50.751 "product_name": "GPT Disk", 00:18:50.751 "block_size": 4096, 00:18:50.751 "num_blocks": 655104, 00:18:50.751 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:50.751 "assigned_rate_limits": { 00:18:50.751 "rw_ios_per_sec": 0, 00:18:50.751 "rw_mbytes_per_sec": 0, 00:18:50.751 "r_mbytes_per_sec": 0, 00:18:50.751 "w_mbytes_per_sec": 0 00:18:50.751 }, 00:18:50.751 "claimed": false, 00:18:50.751 "zoned": false, 00:18:50.751 "supported_io_types": { 00:18:50.751 "read": true, 00:18:50.751 "write": true, 00:18:50.751 "unmap": true, 00:18:50.751 "flush": true, 00:18:50.751 "reset": true, 00:18:50.751 "nvme_admin": false, 00:18:50.751 "nvme_io": false, 00:18:50.751 "nvme_io_md": false, 00:18:50.751 "write_zeroes": true, 00:18:50.751 "zcopy": false, 00:18:50.751 "get_zone_info": false, 00:18:50.751 "zone_management": false, 00:18:50.751 "zone_append": false, 00:18:50.751 "compare": true, 00:18:50.751 "compare_and_write": false, 00:18:50.751 "abort": true, 00:18:50.751 "seek_hole": false, 00:18:50.751 "seek_data": false, 00:18:50.751 "copy": true, 00:18:50.751 "nvme_iov_md": false 00:18:50.751 }, 00:18:50.751 "driver_specific": { 
00:18:50.751 "gpt": { 00:18:50.751 "base_bdev": "Nvme1n1", 00:18:50.751 "offset_blocks": 256, 00:18:50.751 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:18:50.751 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:50.751 "partition_name": "SPDK_TEST_first" 00:18:50.751 } 00:18:50.751 } 00:18:50.751 } 00:18:50.751 ]' 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:18:50.751 { 00:18:50.751 "name": "Nvme1n1p2", 00:18:50.751 "aliases": [ 00:18:50.751 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:18:50.751 ], 00:18:50.751 "product_name": "GPT Disk", 00:18:50.751 "block_size": 4096, 00:18:50.751 "num_blocks": 655103, 00:18:50.751 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:50.751 "assigned_rate_limits": { 00:18:50.751 "rw_ios_per_sec": 0, 00:18:50.751 "rw_mbytes_per_sec": 0, 00:18:50.751 "r_mbytes_per_sec": 0, 00:18:50.751 "w_mbytes_per_sec": 0 00:18:50.751 }, 00:18:50.751 "claimed": false, 00:18:50.751 "zoned": false, 00:18:50.751 "supported_io_types": { 00:18:50.751 "read": true, 00:18:50.751 "write": true, 00:18:50.751 "unmap": true, 00:18:50.751 "flush": true, 00:18:50.751 "reset": true, 00:18:50.751 "nvme_admin": false, 00:18:50.751 "nvme_io": false, 00:18:50.751 "nvme_io_md": false, 00:18:50.751 "write_zeroes": true, 00:18:50.751 "zcopy": false, 00:18:50.751 "get_zone_info": false, 00:18:50.751 "zone_management": false, 00:18:50.751 "zone_append": false, 00:18:50.751 "compare": true, 00:18:50.751 "compare_and_write": false, 00:18:50.751 "abort": true, 00:18:50.751 "seek_hole": false, 00:18:50.751 "seek_data": false, 00:18:50.751 "copy": true, 00:18:50.751 "nvme_iov_md": false 00:18:50.751 }, 00:18:50.751 "driver_specific": { 00:18:50.751 "gpt": { 00:18:50.751 "base_bdev": "Nvme1n1", 00:18:50.751 "offset_blocks": 655360, 00:18:50.751 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:18:50.751 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:50.751 "partition_name": "SPDK_TEST_second" 00:18:50.751 } 00:18:50.751 } 00:18:50.751 } 00:18:50.751 ]' 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:18:50.751 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67628 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67628 ']' 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67628 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67628 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:51.018 killing process with pid 67628 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67628' 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67628 00:18:51.018 11:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67628 00:18:53.549 00:18:53.549 real 0m3.958s 00:18:53.549 user 0m4.266s 00:18:53.549 sys 0m0.431s 00:18:53.549 11:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.549 11:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:53.549 ************************************ 00:18:53.549 END TEST bdev_gpt_uuid 00:18:53.549 ************************************ 00:18:53.549 11:47:49 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:18:53.549 11:47:49 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:53.549 11:47:49 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:18:53.549 11:47:49 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:53.549 11:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:53.549 11:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:18:53.549 11:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:18:53.549 11:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:18:53.549 11:47:50 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:53.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:53.549 Waiting for block devices as requested 00:18:53.549 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:53.808 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:18:53.808 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:53.808 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:59.081 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:59.081 11:47:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:18:59.081 11:47:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:18:59.338 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:18:59.338 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:18:59.338 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:18:59.338 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:18:59.338 11:47:56 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:18:59.338 00:18:59.338 real 1m4.649s 00:18:59.338 user 1m22.642s 00:18:59.338 sys 0m9.389s 00:18:59.338 11:47:56 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.338 11:47:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:59.338 ************************************ 00:18:59.338 END TEST blockdev_nvme_gpt 00:18:59.338 ************************************ 00:18:59.338 11:47:56 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:59.338 11:47:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:59.338 11:47:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.338 11:47:56 -- common/autotest_common.sh@10 -- # set +x 00:18:59.338 ************************************ 00:18:59.338 START TEST nvme 00:18:59.338 ************************************ 00:18:59.338 11:47:56 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:59.338 * Looking for test storage... 00:18:59.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:59.338 11:47:56 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:59.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.467 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.467 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.725 11:47:57 nvme -- nvme/nvme.sh@79 -- # uname 00:19:00.725 11:47:57 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:19:00.725 11:47:57 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:19:00.725 11:47:57 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1071 -- # stubpid=68266 00:19:00.725 Waiting for stub to ready for secondary processes... 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
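A note on the bdev_gpt_uuid test that finished above: its assertions reduce to one pattern — fetch a single bdev over RPC, pull one field out of the JSON with jq, and compare it in bash. A minimal standalone sketch of that pattern, assuming a running SPDK target and the stock rpc.py from the repo (the UUID is the SPDK_TEST_second partition GUID shown in the test output):

    #!/usr/bin/env bash
    # Sketch only: check that a GPT partition bdev is exposed under its
    # unique partition GUID, mirroring the bdev_gpt_uuid assertions above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    # A lookup by name/UUID must return exactly one bdev...
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]] || exit 1
    # ...whose first alias and GPT unique_partition_guid are both that UUID.
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]] || exit 1
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]] || exit 1
    echo "GPT UUID check passed"

The test itself drives the same query through the traced rpc_cmd helper, which is why each jq invocation appears as its own xtrace line in the log above.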
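The "Waiting for stub" message comes from the start_stub helper: the stub binary is launched as the DPDK primary process, and the script polls until the stub has created /var/run/spdk_stub0 before any secondary process starts. Roughly, with the arguments used in this run (a paraphrase of the helper, not its exact body):

    # Start the stub as the DPDK primary process: 4096 MB of hugepage
    # memory, shared-memory id 0, core mask 0xE (cores 1-3).
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    # Poll for the ready file; give up if the stub dies before creating it.
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1
        sleep 1s
    done
    echo done.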
00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68266 ]] 00:19:00.725 11:47:57 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:19:00.725 [2024-07-25 11:47:57.624928] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:00.725 [2024-07-25 11:47:57.625112] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:19:01.659 [2024-07-25 11:47:58.412495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.659 11:47:58 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:01.659 11:47:58 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68266 ]] 00:19:01.659 11:47:58 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:19:01.659 [2024-07-25 11:47:58.636552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.659 [2024-07-25 11:47:58.636718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.659 [2024-07-25 11:47:58.636749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.659 [2024-07-25 11:47:58.658301] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:19:01.659 [2024-07-25 11:47:58.658362] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.659 [2024-07-25 11:47:58.670400] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:19:01.659 [2024-07-25 11:47:58.670578] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:19:01.659 [2024-07-25 11:47:58.673292] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.659 [2024-07-25 11:47:58.673632] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:19:01.659 [2024-07-25 11:47:58.673805] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:19:01.659 [2024-07-25 11:47:58.677188] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.659 [2024-07-25 11:47:58.677470] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:19:01.659 [2024-07-25 11:47:58.677590] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:19:01.659 [2024-07-25 11:47:58.681304] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.659 [2024-07-25 11:47:58.681761] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:19:01.659 [2024-07-25 11:47:58.681910] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:19:01.659 [2024-07-25 11:47:58.682038] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:19:01.659 [2024-07-25 11:47:58.682134] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:19:02.593 11:47:59 nvme -- common/autotest_common.sh@1073 -- 
# '[' -e /var/run/spdk_stub0 ']' 00:19:02.593 done. 00:19:02.593 11:47:59 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:19:02.593 11:47:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:02.593 11:47:59 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:19:02.593 11:47:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.593 11:47:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.593 ************************************ 00:19:02.593 START TEST nvme_reset 00:19:02.593 ************************************ 00:19:02.593 11:47:59 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:02.851 Initializing NVMe Controllers 00:19:02.851 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:02.851 Skipping QEMU NVMe SSD at 0000:00:11.0 00:19:02.851 Skipping QEMU NVMe SSD at 0000:00:13.0 00:19:02.851 Skipping QEMU NVMe SSD at 0000:00:12.0 00:19:02.851 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:03.109 00:19:03.109 real 0m0.299s 00:19:03.109 user 0m0.128s 00:19:03.109 sys 0m0.127s 00:19:03.109 11:47:59 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.109 11:47:59 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:03.109 ************************************ 00:19:03.109 END TEST nvme_reset 00:19:03.109 ************************************ 00:19:03.109 11:47:59 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:03.109 11:47:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:03.109 11:47:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.109 11:47:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:03.109 ************************************ 00:19:03.109 START TEST nvme_identify 00:19:03.109 ************************************ 00:19:03.110 11:47:59 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:19:03.110 11:47:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:03.110 11:47:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:03.110 11:47:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:03.110 11:47:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:03.110 11:47:59 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:03.110 11:47:59 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:19:03.110 11:47:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:03.110 11:47:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:03.110 11:47:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:03.110 11:48:00 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:19:03.110 11:48:00 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:03.110 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:03.371 [2024-07-25 11:48:00.254710] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68299 terminated unexpected 00:19:03.371 
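Before the identify dump below, nvme_identify assembles its controller list by asking gen_nvme.sh for a JSON config and extracting each PCI address with jq, then runs spdk_nvme_identify as a secondary process against the stub. Condensed into a sketch (paths as in this run; error handling trimmed):

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits one attach-controller entry per NVMe device; jq
    # flattens the config into a plain list of PCI BDFs (4 on this VM).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1
    printf '%s\n' "${bdfs[@]}"
    # -i 0 joins shared-memory group 0, i.e. the stub started earlier.
    "$rootdir/build/bin/spdk_nvme_identify" -i 0

The nvme_ctrlr.c *ERROR* lines interleaved with the dump flag a secondary process (pid 68299) that went away without detaching; the identify run itself continues past them.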
===================================================== 00:19:03.371 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:03.371 ===================================================== 00:19:03.371 Controller Capabilities/Features 00:19:03.371 ================================ 00:19:03.371 Vendor ID: 1b36 00:19:03.371 Subsystem Vendor ID: 1af4 00:19:03.371 Serial Number: 12340 00:19:03.371 Model Number: QEMU NVMe Ctrl 00:19:03.371 Firmware Version: 8.0.0 00:19:03.371 Recommended Arb Burst: 6 00:19:03.371 IEEE OUI Identifier: 00 54 52 00:19:03.371 Multi-path I/O 00:19:03.371 May have multiple subsystem ports: No 00:19:03.371 May have multiple controllers: No 00:19:03.371 Associated with SR-IOV VF: No 00:19:03.371 Max Data Transfer Size: 524288 00:19:03.371 Max Number of Namespaces: 256 00:19:03.371 Max Number of I/O Queues: 64 00:19:03.371 NVMe Specification Version (VS): 1.4 00:19:03.371 NVMe Specification Version (Identify): 1.4 00:19:03.371 Maximum Queue Entries: 2048 00:19:03.371 Contiguous Queues Required: Yes 00:19:03.371 Arbitration Mechanisms Supported 00:19:03.371 Weighted Round Robin: Not Supported 00:19:03.371 Vendor Specific: Not Supported 00:19:03.371 Reset Timeout: 7500 ms 00:19:03.371 Doorbell Stride: 4 bytes 00:19:03.371 NVM Subsystem Reset: Not Supported 00:19:03.371 Command Sets Supported 00:19:03.371 NVM Command Set: Supported 00:19:03.371 Boot Partition: Not Supported 00:19:03.371 Memory Page Size Minimum: 4096 bytes 00:19:03.371 Memory Page Size Maximum: 65536 bytes 00:19:03.371 Persistent Memory Region: Not Supported 00:19:03.371 Optional Asynchronous Events Supported 00:19:03.371 Namespace Attribute Notices: Supported 00:19:03.371 Firmware Activation Notices: Not Supported 00:19:03.371 ANA Change Notices: Not Supported 00:19:03.371 PLE Aggregate Log Change Notices: Not Supported 00:19:03.371 LBA Status Info Alert Notices: Not Supported 00:19:03.371 EGE Aggregate Log Change Notices: Not Supported 00:19:03.371 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.371 Zone Descriptor Change Notices: Not Supported 00:19:03.371 Discovery Log Change Notices: Not Supported 00:19:03.371 Controller Attributes 00:19:03.371 128-bit Host Identifier: Not Supported 00:19:03.371 Non-Operational Permissive Mode: Not Supported 00:19:03.371 NVM Sets: Not Supported 00:19:03.371 Read Recovery Levels: Not Supported 00:19:03.371 Endurance Groups: Not Supported 00:19:03.371 Predictable Latency Mode: Not Supported 00:19:03.371 Traffic Based Keep ALive: Not Supported 00:19:03.371 Namespace Granularity: Not Supported 00:19:03.371 SQ Associations: Not Supported 00:19:03.371 UUID List: Not Supported 00:19:03.371 Multi-Domain Subsystem: Not Supported 00:19:03.371 Fixed Capacity Management: Not Supported 00:19:03.371 Variable Capacity Management: Not Supported 00:19:03.371 Delete Endurance Group: Not Supported 00:19:03.371 Delete NVM Set: Not Supported 00:19:03.371 Extended LBA Formats Supported: Supported 00:19:03.371 Flexible Data Placement Supported: Not Supported 00:19:03.371 00:19:03.371 Controller Memory Buffer Support 00:19:03.371 ================================ 00:19:03.371 Supported: No 00:19:03.371 00:19:03.371 Persistent Memory Region Support 00:19:03.371 ================================ 00:19:03.371 Supported: No 00:19:03.371 00:19:03.371 Admin Command Set Attributes 00:19:03.371 ============================ 00:19:03.371 Security Send/Receive: Not Supported 00:19:03.371 Format NVM: Supported 00:19:03.371 Firmware Activate/Download: Not Supported 00:19:03.371 Namespace Management: 
Supported 00:19:03.371 Device Self-Test: Not Supported 00:19:03.371 Directives: Supported 00:19:03.371 NVMe-MI: Not Supported 00:19:03.371 Virtualization Management: Not Supported 00:19:03.371 Doorbell Buffer Config: Supported 00:19:03.371 Get LBA Status Capability: Not Supported 00:19:03.371 Command & Feature Lockdown Capability: Not Supported 00:19:03.371 Abort Command Limit: 4 00:19:03.371 Async Event Request Limit: 4 00:19:03.371 Number of Firmware Slots: N/A 00:19:03.371 Firmware Slot 1 Read-Only: N/A 00:19:03.371 Firmware Activation Without Reset: N/A 00:19:03.371 Multiple Update Detection Support: N/A 00:19:03.371 Firmware Update Granularity: No Information Provided 00:19:03.371 Per-Namespace SMART Log: Yes 00:19:03.371 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.371 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:03.371 Command Effects Log Page: Supported 00:19:03.371 Get Log Page Extended Data: Supported 00:19:03.371 Telemetry Log Pages: Not Supported 00:19:03.371 Persistent Event Log Pages: Not Supported 00:19:03.371 Supported Log Pages Log Page: May Support 00:19:03.371 Commands Supported & Effects Log Page: Not Supported 00:19:03.371 Feature Identifiers & Effects Log Page:May Support 00:19:03.371 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.371 Data Area 4 for Telemetry Log: Not Supported 00:19:03.371 Error Log Page Entries Supported: 1 00:19:03.371 Keep Alive: Not Supported 00:19:03.371 00:19:03.371 NVM Command Set Attributes 00:19:03.371 ========================== 00:19:03.371 Submission Queue Entry Size 00:19:03.371 Max: 64 00:19:03.371 Min: 64 00:19:03.371 Completion Queue Entry Size 00:19:03.371 Max: 16 00:19:03.371 Min: 16 00:19:03.371 Number of Namespaces: 256 00:19:03.371 Compare Command: Supported 00:19:03.371 Write Uncorrectable Command: Not Supported 00:19:03.371 Dataset Management Command: Supported 00:19:03.371 Write Zeroes Command: Supported 00:19:03.371 Set Features Save Field: Supported 00:19:03.371 Reservations: Not Supported 00:19:03.371 Timestamp: Supported 00:19:03.371 Copy: Supported 00:19:03.371 Volatile Write Cache: Present 00:19:03.371 Atomic Write Unit (Normal): 1 00:19:03.371 Atomic Write Unit (PFail): 1 00:19:03.371 Atomic Compare & Write Unit: 1 00:19:03.371 Fused Compare & Write: Not Supported 00:19:03.371 Scatter-Gather List 00:19:03.371 SGL Command Set: Supported 00:19:03.371 SGL Keyed: Not Supported 00:19:03.371 SGL Bit Bucket Descriptor: Not Supported 00:19:03.371 SGL Metadata Pointer: Not Supported 00:19:03.371 Oversized SGL: Not Supported 00:19:03.371 SGL Metadata Address: Not Supported 00:19:03.371 SGL Offset: Not Supported 00:19:03.371 Transport SGL Data Block: Not Supported 00:19:03.371 Replay Protected Memory Block: Not Supported 00:19:03.371 00:19:03.371 Firmware Slot Information 00:19:03.371 ========================= 00:19:03.371 Active slot: 1 00:19:03.371 Slot 1 Firmware Revision: 1.0 00:19:03.371 00:19:03.371 00:19:03.371 Commands Supported and Effects 00:19:03.371 ============================== 00:19:03.371 Admin Commands 00:19:03.371 -------------- 00:19:03.371 Delete I/O Submission Queue (00h): Supported 00:19:03.371 Create I/O Submission Queue (01h): Supported 00:19:03.371 Get Log Page (02h): Supported 00:19:03.371 Delete I/O Completion Queue (04h): Supported 00:19:03.371 Create I/O Completion Queue (05h): Supported 00:19:03.371 Identify (06h): Supported 00:19:03.371 Abort (08h): Supported 00:19:03.371 Set Features (09h): Supported 00:19:03.371 Get Features (0Ah): Supported 00:19:03.371 Asynchronous 
Event Request (0Ch): Supported 00:19:03.371 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.371 Directive Send (19h): Supported 00:19:03.371 Directive Receive (1Ah): Supported 00:19:03.371 Virtualization Management (1Ch): Supported 00:19:03.371 Doorbell Buffer Config (7Ch): Supported 00:19:03.371 Format NVM (80h): Supported LBA-Change 00:19:03.371 I/O Commands 00:19:03.371 ------------ 00:19:03.371 Flush (00h): Supported LBA-Change 00:19:03.371 Write (01h): Supported LBA-Change 00:19:03.371 Read (02h): Supported 00:19:03.371 Compare (05h): Supported 00:19:03.371 Write Zeroes (08h): Supported LBA-Change 00:19:03.371 Dataset Management (09h): Supported LBA-Change 00:19:03.371 Unknown (0Ch): Supported 00:19:03.371 Unknown (12h): Supported 00:19:03.371 Copy (19h): Supported LBA-Change 00:19:03.371 Unknown (1Dh): Supported LBA-Change 00:19:03.371 00:19:03.371 Error Log 00:19:03.371 ========= 00:19:03.371 00:19:03.371 Arbitration 00:19:03.371 =========== 00:19:03.371 Arbitration Burst: no limit 00:19:03.371 00:19:03.371 Power Management 00:19:03.371 ================ 00:19:03.371 Number of Power States: 1 00:19:03.371 Current Power State: Power State #0 00:19:03.371 Power State #0: 00:19:03.371 Max Power: 25.00 W 00:19:03.371 Non-Operational State: Operational 00:19:03.371 Entry Latency: 16 microseconds 00:19:03.371 Exit Latency: 4 microseconds 00:19:03.371 Relative Read Throughput: 0 00:19:03.371 Relative Read Latency: 0 00:19:03.371 Relative Write Throughput: 0 00:19:03.371 Relative Write Latency: 0 00:19:03.371 [2024-07-25 11:48:00.256199] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68299 terminated unexpected 00:19:03.371 Idle Power: Not Reported 00:19:03.371 Active Power: Not Reported 00:19:03.371 Non-Operational Permissive Mode: Not Supported 00:19:03.371 00:19:03.371 Health Information 00:19:03.372 ================== 00:19:03.372 Critical Warnings: 00:19:03.372 Available Spare Space: OK 00:19:03.372 Temperature: OK 00:19:03.372 Device Reliability: OK 00:19:03.372 Read Only: No 00:19:03.372 Volatile Memory Backup: OK 00:19:03.372 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.372 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.372 Available Spare: 0% 00:19:03.372 Available Spare Threshold: 0% 00:19:03.372 Life Percentage Used: 0% 00:19:03.372 Data Units Read: 687 00:19:03.372 Data Units Written: 579 00:19:03.372 Host Read Commands: 34530 00:19:03.372 Host Write Commands: 33568 00:19:03.372 Controller Busy Time: 0 minutes 00:19:03.372 Power Cycles: 0 00:19:03.372 Power On Hours: 0 hours 00:19:03.372 Unsafe Shutdowns: 0 00:19:03.372 Unrecoverable Media Errors: 0 00:19:03.372 Lifetime Error Log Entries: 0 00:19:03.372 Warning Temperature Time: 0 minutes 00:19:03.372 Critical Temperature Time: 0 minutes 00:19:03.372 00:19:03.372 Number of Queues 00:19:03.372 ================ 00:19:03.372 Number of I/O Submission Queues: 64 00:19:03.372 Number of I/O Completion Queues: 64 00:19:03.372 00:19:03.372 ZNS Specific Controller Data 00:19:03.372 ============================ 00:19:03.372 Zone Append Size Limit: 0 00:19:03.372 00:19:03.372 00:19:03.372 Active Namespaces 00:19:03.372 ================= 00:19:03.372 Namespace ID:1 00:19:03.372 Error Recovery Timeout: Unlimited 00:19:03.372 Command Set Identifier: NVM (00h) 00:19:03.372 Deallocate: Supported 00:19:03.372 Deallocated/Unwritten Error: Supported 00:19:03.372 Deallocated Read Value: All 0x00 00:19:03.372 Deallocate in Write Zeroes: Not Supported 00:19:03.372 Deallocated
Guard Field: 0xFFFF 00:19:03.372 Flush: Supported 00:19:03.372 Reservation: Not Supported 00:19:03.372 Metadata Transferred as: Separate Metadata Buffer 00:19:03.372 Namespace Sharing Capabilities: Private 00:19:03.372 Size (in LBAs): 1548666 (5GiB) 00:19:03.372 Capacity (in LBAs): 1548666 (5GiB) 00:19:03.372 Utilization (in LBAs): 1548666 (5GiB) 00:19:03.372 Thin Provisioning: Not Supported 00:19:03.372 Per-NS Atomic Units: No 00:19:03.372 Maximum Single Source Range Length: 128 00:19:03.372 Maximum Copy Length: 128 00:19:03.372 Maximum Source Range Count: 128 00:19:03.372 NGUID/EUI64 Never Reused: No 00:19:03.372 Namespace Write Protected: No 00:19:03.372 Number of LBA Formats: 8 00:19:03.372 Current LBA Format: LBA Format #07 00:19:03.372 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.372 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.372 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.372 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.372 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.372 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.372 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.372 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.372 00:19:03.372 NVM Specific Namespace Data 00:19:03.372 =========================== 00:19:03.372 Logical Block Storage Tag Mask: 0 00:19:03.372 Protection Information Capabilities: 00:19:03.372 16b Guard Protection Information Storage Tag Support: No 00:19:03.372 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.372 Storage Tag Check Read Support: No 00:19:03.372 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.372 ===================================================== 00:19:03.372 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:03.372 ===================================================== 00:19:03.372 Controller Capabilities/Features 00:19:03.372 ================================ 00:19:03.372 Vendor ID: 1b36 00:19:03.372 Subsystem Vendor ID: 1af4 00:19:03.372 Serial Number: 12341 00:19:03.372 Model Number: QEMU NVMe Ctrl 00:19:03.372 Firmware Version: 8.0.0 00:19:03.372 Recommended Arb Burst: 6 00:19:03.372 IEEE OUI Identifier: 00 54 52 00:19:03.372 Multi-path I/O 00:19:03.372 May have multiple subsystem ports: No 00:19:03.372 May have multiple controllers: No 00:19:03.372 Associated with SR-IOV VF: No 00:19:03.372 Max Data Transfer Size: 524288 00:19:03.372 Max Number of Namespaces: 256 00:19:03.372 Max Number of I/O Queues: 64 00:19:03.372 NVMe Specification Version (VS): 1.4 00:19:03.372 NVMe Specification Version (Identify): 1.4 00:19:03.372 Maximum Queue Entries: 2048 00:19:03.372 Contiguous Queues Required: Yes 00:19:03.372 Arbitration Mechanisms Supported 
00:19:03.372 Weighted Round Robin: Not Supported 00:19:03.372 Vendor Specific: Not Supported 00:19:03.372 Reset Timeout: 7500 ms 00:19:03.372 Doorbell Stride: 4 bytes 00:19:03.372 NVM Subsystem Reset: Not Supported 00:19:03.372 Command Sets Supported 00:19:03.372 NVM Command Set: Supported 00:19:03.372 Boot Partition: Not Supported 00:19:03.372 Memory Page Size Minimum: 4096 bytes 00:19:03.372 Memory Page Size Maximum: 65536 bytes 00:19:03.372 Persistent Memory Region: Not Supported 00:19:03.372 Optional Asynchronous Events Supported 00:19:03.372 Namespace Attribute Notices: Supported 00:19:03.372 Firmware Activation Notices: Not Supported 00:19:03.372 ANA Change Notices: Not Supported 00:19:03.372 PLE Aggregate Log Change Notices: Not Supported 00:19:03.372 LBA Status Info Alert Notices: Not Supported 00:19:03.372 EGE Aggregate Log Change Notices: Not Supported 00:19:03.372 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.372 Zone Descriptor Change Notices: Not Supported 00:19:03.372 Discovery Log Change Notices: Not Supported 00:19:03.372 Controller Attributes 00:19:03.372 128-bit Host Identifier: Not Supported 00:19:03.372 Non-Operational Permissive Mode: Not Supported 00:19:03.372 NVM Sets: Not Supported 00:19:03.372 Read Recovery Levels: Not Supported 00:19:03.372 Endurance Groups: Not Supported 00:19:03.372 Predictable Latency Mode: Not Supported 00:19:03.372 Traffic Based Keep ALive: Not Supported 00:19:03.372 Namespace Granularity: Not Supported 00:19:03.372 SQ Associations: Not Supported 00:19:03.372 UUID List: Not Supported 00:19:03.372 Multi-Domain Subsystem: Not Supported 00:19:03.372 Fixed Capacity Management: Not Supported 00:19:03.372 Variable Capacity Management: Not Supported 00:19:03.372 Delete Endurance Group: Not Supported 00:19:03.372 Delete NVM Set: Not Supported 00:19:03.372 Extended LBA Formats Supported: Supported 00:19:03.372 Flexible Data Placement Supported: Not Supported 00:19:03.372 00:19:03.372 Controller Memory Buffer Support 00:19:03.372 ================================ 00:19:03.372 Supported: No 00:19:03.372 00:19:03.372 Persistent Memory Region Support 00:19:03.372 ================================ 00:19:03.372 Supported: No 00:19:03.372 00:19:03.372 Admin Command Set Attributes 00:19:03.372 ============================ 00:19:03.372 Security Send/Receive: Not Supported 00:19:03.372 Format NVM: Supported 00:19:03.372 Firmware Activate/Download: Not Supported 00:19:03.372 Namespace Management: Supported 00:19:03.372 Device Self-Test: Not Supported 00:19:03.372 Directives: Supported 00:19:03.372 NVMe-MI: Not Supported 00:19:03.372 Virtualization Management: Not Supported 00:19:03.372 Doorbell Buffer Config: Supported 00:19:03.372 Get LBA Status Capability: Not Supported 00:19:03.372 Command & Feature Lockdown Capability: Not Supported 00:19:03.372 Abort Command Limit: 4 00:19:03.372 Async Event Request Limit: 4 00:19:03.372 Number of Firmware Slots: N/A 00:19:03.372 Firmware Slot 1 Read-Only: N/A 00:19:03.372 Firmware Activation Without Reset: N/A 00:19:03.372 Multiple Update Detection Support: N/A 00:19:03.372 Firmware Update Granularity: No Information Provided 00:19:03.372 Per-Namespace SMART Log: Yes 00:19:03.372 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.372 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:19:03.372 Command Effects Log Page: Supported 00:19:03.372 Get Log Page Extended Data: Supported 00:19:03.372 Telemetry Log Pages: Not Supported 00:19:03.373 Persistent Event Log Pages: Not Supported 00:19:03.373 Supported Log 
Pages Log Page: May Support 00:19:03.373 Commands Supported & Effects Log Page: Not Supported 00:19:03.373 Feature Identifiers & Effects Log Page:May Support 00:19:03.373 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.373 Data Area 4 for Telemetry Log: Not Supported 00:19:03.373 Error Log Page Entries Supported: 1 00:19:03.373 Keep Alive: Not Supported 00:19:03.373 00:19:03.373 NVM Command Set Attributes 00:19:03.373 ========================== 00:19:03.373 Submission Queue Entry Size 00:19:03.373 Max: 64 00:19:03.373 Min: 64 00:19:03.373 Completion Queue Entry Size 00:19:03.373 Max: 16 00:19:03.373 Min: 16 00:19:03.373 Number of Namespaces: 256 00:19:03.373 Compare Command: Supported 00:19:03.373 Write Uncorrectable Command: Not Supported 00:19:03.373 Dataset Management Command: Supported 00:19:03.373 Write Zeroes Command: Supported 00:19:03.373 Set Features Save Field: Supported 00:19:03.373 Reservations: Not Supported 00:19:03.373 Timestamp: Supported 00:19:03.373 Copy: Supported 00:19:03.373 Volatile Write Cache: Present 00:19:03.373 Atomic Write Unit (Normal): 1 00:19:03.373 Atomic Write Unit (PFail): 1 00:19:03.373 Atomic Compare & Write Unit: 1 00:19:03.373 Fused Compare & Write: Not Supported 00:19:03.373 Scatter-Gather List 00:19:03.373 SGL Command Set: Supported 00:19:03.373 SGL Keyed: Not Supported 00:19:03.373 SGL Bit Bucket Descriptor: Not Supported 00:19:03.373 SGL Metadata Pointer: Not Supported 00:19:03.373 Oversized SGL: Not Supported 00:19:03.373 SGL Metadata Address: Not Supported 00:19:03.373 SGL Offset: Not Supported 00:19:03.373 Transport SGL Data Block: Not Supported 00:19:03.373 Replay Protected Memory Block: Not Supported 00:19:03.373 00:19:03.373 Firmware Slot Information 00:19:03.373 ========================= 00:19:03.373 Active slot: 1 00:19:03.373 Slot 1 Firmware Revision: 1.0 00:19:03.373 00:19:03.373 00:19:03.373 Commands Supported and Effects 00:19:03.373 ============================== 00:19:03.373 Admin Commands 00:19:03.373 -------------- 00:19:03.373 Delete I/O Submission Queue (00h): Supported 00:19:03.373 Create I/O Submission Queue (01h): Supported 00:19:03.373 Get Log Page (02h): Supported 00:19:03.373 Delete I/O Completion Queue (04h): Supported 00:19:03.373 Create I/O Completion Queue (05h): Supported 00:19:03.373 Identify (06h): Supported 00:19:03.373 Abort (08h): Supported 00:19:03.373 Set Features (09h): Supported 00:19:03.373 Get Features (0Ah): Supported 00:19:03.373 Asynchronous Event Request (0Ch): Supported 00:19:03.373 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.373 Directive Send (19h): Supported 00:19:03.373 Directive Receive (1Ah): Supported 00:19:03.373 Virtualization Management (1Ch): Supported 00:19:03.373 Doorbell Buffer Config (7Ch): Supported 00:19:03.373 Format NVM (80h): Supported LBA-Change 00:19:03.373 I/O Commands 00:19:03.373 ------------ 00:19:03.373 Flush (00h): Supported LBA-Change 00:19:03.373 Write (01h): Supported LBA-Change 00:19:03.373 Read (02h): Supported 00:19:03.373 Compare (05h): Supported 00:19:03.373 Write Zeroes (08h): Supported LBA-Change 00:19:03.373 Dataset Management (09h): Supported LBA-Change 00:19:03.373 Unknown (0Ch): Supported 00:19:03.373 Unknown (12h): Supported 00:19:03.373 Copy (19h): Supported LBA-Change 00:19:03.373 Unknown (1Dh): Supported LBA-Change 00:19:03.373 00:19:03.373 Error Log 00:19:03.373 ========= 00:19:03.373 00:19:03.373 Arbitration 00:19:03.373 =========== 00:19:03.373 Arbitration Burst: no limit 00:19:03.373 00:19:03.373 Power Management 
00:19:03.373 ================ 00:19:03.373 Number of Power States: 1 00:19:03.373 Current Power State: Power State #0 00:19:03.373 Power State #0: 00:19:03.373 Max Power: 25.00 W 00:19:03.373 Non-Operational State: Operational 00:19:03.373 Entry Latency: 16 microseconds 00:19:03.373 Exit Latency: 4 microseconds 00:19:03.373 Relative Read Throughput: 0 00:19:03.373 Relative Read Latency: 0 00:19:03.373 Relative Write Throughput: 0 00:19:03.373 Relative Write Latency: 0 00:19:03.373 Idle Power: Not Reported 00:19:03.373 Active Power: Not Reported 00:19:03.373 Non-Operational Permissive Mode: Not Supported 00:19:03.373 00:19:03.373 Health Information 00:19:03.373 ================== 00:19:03.373 Critical Warnings: 00:19:03.373 Available Spare Space: OK 00:19:03.373 [2024-07-25 11:48:00.257338] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68299 terminated unexpected 00:19:03.373 Temperature: OK 00:19:03.373 Device Reliability: OK 00:19:03.373 Read Only: No 00:19:03.373 Volatile Memory Backup: OK 00:19:03.373 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.373 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.373 Available Spare: 0% 00:19:03.373 Available Spare Threshold: 0% 00:19:03.373 Life Percentage Used: 0% 00:19:03.373 Data Units Read: 1039 00:19:03.373 Data Units Written: 830 00:19:03.373 Host Read Commands: 51433 00:19:03.373 Host Write Commands: 48608 00:19:03.373 Controller Busy Time: 0 minutes 00:19:03.373 Power Cycles: 0 00:19:03.373 Power On Hours: 0 hours 00:19:03.373 Unsafe Shutdowns: 0 00:19:03.373 Unrecoverable Media Errors: 0 00:19:03.373 Lifetime Error Log Entries: 0 00:19:03.373 Warning Temperature Time: 0 minutes 00:19:03.373 Critical Temperature Time: 0 minutes 00:19:03.373 00:19:03.373 Number of Queues 00:19:03.373 ================ 00:19:03.373 Number of I/O Submission Queues: 64 00:19:03.373 Number of I/O Completion Queues: 64 00:19:03.373 00:19:03.373 ZNS Specific Controller Data 00:19:03.373 ============================ 00:19:03.373 Zone Append Size Limit: 0 00:19:03.373 00:19:03.373 00:19:03.373 Active Namespaces 00:19:03.373 ================= 00:19:03.373 Namespace ID:1 00:19:03.373 Error Recovery Timeout: Unlimited 00:19:03.373 Command Set Identifier: NVM (00h) 00:19:03.373 Deallocate: Supported 00:19:03.373 Deallocated/Unwritten Error: Supported 00:19:03.373 Deallocated Read Value: All 0x00 00:19:03.373 Deallocate in Write Zeroes: Not Supported 00:19:03.373 Deallocated Guard Field: 0xFFFF 00:19:03.373 Flush: Supported 00:19:03.373 Reservation: Not Supported 00:19:03.373 Namespace Sharing Capabilities: Private 00:19:03.373 Size (in LBAs): 1310720 (5GiB) 00:19:03.373 Capacity (in LBAs): 1310720 (5GiB) 00:19:03.373 Utilization (in LBAs): 1310720 (5GiB) 00:19:03.373 Thin Provisioning: Not Supported 00:19:03.373 Per-NS Atomic Units: No 00:19:03.373 Maximum Single Source Range Length: 128 00:19:03.373 Maximum Copy Length: 128 00:19:03.373 Maximum Source Range Count: 128 00:19:03.373 NGUID/EUI64 Never Reused: No 00:19:03.373 Namespace Write Protected: No 00:19:03.373 Number of LBA Formats: 8 00:19:03.373 Current LBA Format: LBA Format #04 00:19:03.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.373 LBA
Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.373 00:19:03.373 NVM Specific Namespace Data 00:19:03.373 =========================== 00:19:03.373 Logical Block Storage Tag Mask: 0 00:19:03.373 Protection Information Capabilities: 00:19:03.373 16b Guard Protection Information Storage Tag Support: No 00:19:03.373 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.373 Storage Tag Check Read Support: No 00:19:03.373 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.373 ===================================================== 00:19:03.373 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:03.373 ===================================================== 00:19:03.374 Controller Capabilities/Features 00:19:03.374 ================================ 00:19:03.374 Vendor ID: 1b36 00:19:03.374 Subsystem Vendor ID: 1af4 00:19:03.374 Serial Number: 12343 00:19:03.374 Model Number: QEMU NVMe Ctrl 00:19:03.374 Firmware Version: 8.0.0 00:19:03.374 Recommended Arb Burst: 6 00:19:03.374 IEEE OUI Identifier: 00 54 52 00:19:03.374 Multi-path I/O 00:19:03.374 May have multiple subsystem ports: No 00:19:03.374 May have multiple controllers: Yes 00:19:03.374 Associated with SR-IOV VF: No 00:19:03.374 Max Data Transfer Size: 524288 00:19:03.374 Max Number of Namespaces: 256 00:19:03.374 Max Number of I/O Queues: 64 00:19:03.374 NVMe Specification Version (VS): 1.4 00:19:03.374 NVMe Specification Version (Identify): 1.4 00:19:03.374 Maximum Queue Entries: 2048 00:19:03.374 Contiguous Queues Required: Yes 00:19:03.374 Arbitration Mechanisms Supported 00:19:03.374 Weighted Round Robin: Not Supported 00:19:03.374 Vendor Specific: Not Supported 00:19:03.374 Reset Timeout: 7500 ms 00:19:03.374 Doorbell Stride: 4 bytes 00:19:03.374 NVM Subsystem Reset: Not Supported 00:19:03.374 Command Sets Supported 00:19:03.374 NVM Command Set: Supported 00:19:03.374 Boot Partition: Not Supported 00:19:03.374 Memory Page Size Minimum: 4096 bytes 00:19:03.374 Memory Page Size Maximum: 65536 bytes 00:19:03.374 Persistent Memory Region: Not Supported 00:19:03.374 Optional Asynchronous Events Supported 00:19:03.374 Namespace Attribute Notices: Supported 00:19:03.374 Firmware Activation Notices: Not Supported 00:19:03.374 ANA Change Notices: Not Supported 00:19:03.374 PLE Aggregate Log Change Notices: Not Supported 00:19:03.374 LBA Status Info Alert Notices: Not Supported 00:19:03.374 EGE Aggregate Log Change Notices: Not Supported 00:19:03.374 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.374 Zone Descriptor Change Notices: Not Supported 00:19:03.374 Discovery Log Change Notices: Not Supported 00:19:03.374 Controller Attributes 00:19:03.374 128-bit Host Identifier: Not 
Supported 00:19:03.374 Non-Operational Permissive Mode: Not Supported 00:19:03.374 NVM Sets: Not Supported 00:19:03.374 Read Recovery Levels: Not Supported 00:19:03.374 Endurance Groups: Supported 00:19:03.374 Predictable Latency Mode: Not Supported 00:19:03.374 Traffic Based Keep ALive: Not Supported 00:19:03.374 Namespace Granularity: Not Supported 00:19:03.374 SQ Associations: Not Supported 00:19:03.374 UUID List: Not Supported 00:19:03.374 Multi-Domain Subsystem: Not Supported 00:19:03.374 Fixed Capacity Management: Not Supported 00:19:03.374 Variable Capacity Management: Not Supported 00:19:03.374 Delete Endurance Group: Not Supported 00:19:03.374 Delete NVM Set: Not Supported 00:19:03.374 Extended LBA Formats Supported: Supported 00:19:03.374 Flexible Data Placement Supported: Supported 00:19:03.374 00:19:03.374 Controller Memory Buffer Support 00:19:03.374 ================================ 00:19:03.374 Supported: No 00:19:03.374 00:19:03.374 Persistent Memory Region Support 00:19:03.374 ================================ 00:19:03.374 Supported: No 00:19:03.374 00:19:03.374 Admin Command Set Attributes 00:19:03.374 ============================ 00:19:03.374 Security Send/Receive: Not Supported 00:19:03.374 Format NVM: Supported 00:19:03.374 Firmware Activate/Download: Not Supported 00:19:03.374 Namespace Management: Supported 00:19:03.374 Device Self-Test: Not Supported 00:19:03.374 Directives: Supported 00:19:03.374 NVMe-MI: Not Supported 00:19:03.374 Virtualization Management: Not Supported 00:19:03.374 Doorbell Buffer Config: Supported 00:19:03.374 Get LBA Status Capability: Not Supported 00:19:03.374 Command & Feature Lockdown Capability: Not Supported 00:19:03.374 Abort Command Limit: 4 00:19:03.374 Async Event Request Limit: 4 00:19:03.374 Number of Firmware Slots: N/A 00:19:03.374 Firmware Slot 1 Read-Only: N/A 00:19:03.374 Firmware Activation Without Reset: N/A 00:19:03.374 Multiple Update Detection Support: N/A 00:19:03.374 Firmware Update Granularity: No Information Provided 00:19:03.374 Per-Namespace SMART Log: Yes 00:19:03.374 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.374 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:03.374 Command Effects Log Page: Supported 00:19:03.374 Get Log Page Extended Data: Supported 00:19:03.374 Telemetry Log Pages: Not Supported 00:19:03.374 Persistent Event Log Pages: Not Supported 00:19:03.374 Supported Log Pages Log Page: May Support 00:19:03.374 Commands Supported & Effects Log Page: Not Supported 00:19:03.374 Feature Identifiers & Effects Log Page:May Support 00:19:03.374 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.374 Data Area 4 for Telemetry Log: Not Supported 00:19:03.374 Error Log Page Entries Supported: 1 00:19:03.374 Keep Alive: Not Supported 00:19:03.374 00:19:03.374 NVM Command Set Attributes 00:19:03.374 ========================== 00:19:03.374 Submission Queue Entry Size 00:19:03.374 Max: 64 00:19:03.374 Min: 64 00:19:03.374 Completion Queue Entry Size 00:19:03.374 Max: 16 00:19:03.374 Min: 16 00:19:03.374 Number of Namespaces: 256 00:19:03.374 Compare Command: Supported 00:19:03.374 Write Uncorrectable Command: Not Supported 00:19:03.374 Dataset Management Command: Supported 00:19:03.374 Write Zeroes Command: Supported 00:19:03.374 Set Features Save Field: Supported 00:19:03.374 Reservations: Not Supported 00:19:03.374 Timestamp: Supported 00:19:03.374 Copy: Supported 00:19:03.374 Volatile Write Cache: Present 00:19:03.374 Atomic Write Unit (Normal): 1 00:19:03.374 Atomic Write Unit (PFail): 
1 00:19:03.374 Atomic Compare & Write Unit: 1 00:19:03.374 Fused Compare & Write: Not Supported 00:19:03.374 Scatter-Gather List 00:19:03.374 SGL Command Set: Supported 00:19:03.374 SGL Keyed: Not Supported 00:19:03.374 SGL Bit Bucket Descriptor: Not Supported 00:19:03.374 SGL Metadata Pointer: Not Supported 00:19:03.374 Oversized SGL: Not Supported 00:19:03.374 SGL Metadata Address: Not Supported 00:19:03.374 SGL Offset: Not Supported 00:19:03.374 Transport SGL Data Block: Not Supported 00:19:03.374 Replay Protected Memory Block: Not Supported 00:19:03.374 00:19:03.374 Firmware Slot Information 00:19:03.374 ========================= 00:19:03.374 Active slot: 1 00:19:03.374 Slot 1 Firmware Revision: 1.0 00:19:03.374 00:19:03.374 00:19:03.374 Commands Supported and Effects 00:19:03.374 ============================== 00:19:03.374 Admin Commands 00:19:03.374 -------------- 00:19:03.374 Delete I/O Submission Queue (00h): Supported 00:19:03.374 Create I/O Submission Queue (01h): Supported 00:19:03.374 Get Log Page (02h): Supported 00:19:03.374 Delete I/O Completion Queue (04h): Supported 00:19:03.374 Create I/O Completion Queue (05h): Supported 00:19:03.374 Identify (06h): Supported 00:19:03.374 Abort (08h): Supported 00:19:03.374 Set Features (09h): Supported 00:19:03.374 Get Features (0Ah): Supported 00:19:03.374 Asynchronous Event Request (0Ch): Supported 00:19:03.374 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.374 Directive Send (19h): Supported 00:19:03.374 Directive Receive (1Ah): Supported 00:19:03.374 Virtualization Management (1Ch): Supported 00:19:03.374 Doorbell Buffer Config (7Ch): Supported 00:19:03.374 Format NVM (80h): Supported LBA-Change 00:19:03.374 I/O Commands 00:19:03.374 ------------ 00:19:03.374 Flush (00h): Supported LBA-Change 00:19:03.374 Write (01h): Supported LBA-Change 00:19:03.374 Read (02h): Supported 00:19:03.374 Compare (05h): Supported 00:19:03.374 Write Zeroes (08h): Supported LBA-Change 00:19:03.374 Dataset Management (09h): Supported LBA-Change 00:19:03.374 Unknown (0Ch): Supported 00:19:03.374 Unknown (12h): Supported 00:19:03.374 Copy (19h): Supported LBA-Change 00:19:03.374 Unknown (1Dh): Supported LBA-Change 00:19:03.374 00:19:03.374 Error Log 00:19:03.374 ========= 00:19:03.374 00:19:03.374 Arbitration 00:19:03.374 =========== 00:19:03.374 Arbitration Burst: no limit 00:19:03.374 00:19:03.374 Power Management 00:19:03.374 ================ 00:19:03.374 Number of Power States: 1 00:19:03.374 Current Power State: Power State #0 00:19:03.374 Power State #0: 00:19:03.374 Max Power: 25.00 W 00:19:03.374 Non-Operational State: Operational 00:19:03.374 Entry Latency: 16 microseconds 00:19:03.374 Exit Latency: 4 microseconds 00:19:03.375 Relative Read Throughput: 0 00:19:03.375 Relative Read Latency: 0 00:19:03.375 Relative Write Throughput: 0 00:19:03.375 Relative Write Latency: 0 00:19:03.375 Idle Power: Not Reported 00:19:03.375 Active Power: Not Reported 00:19:03.375 Non-Operational Permissive Mode: Not Supported 00:19:03.375 00:19:03.375 Health Information 00:19:03.375 ================== 00:19:03.375 Critical Warnings: 00:19:03.375 Available Spare Space: OK 00:19:03.375 Temperature: OK 00:19:03.375 Device Reliability: OK 00:19:03.375 Read Only: No 00:19:03.375 Volatile Memory Backup: OK 00:19:03.375 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.375 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.375 Available Spare: 0% 00:19:03.375 Available Spare Threshold: 0% 00:19:03.375 Life Percentage Used: 0% 00:19:03.375 Data 
Units Read: 788 00:19:03.375 Data Units Written: 682 00:19:03.375 Host Read Commands: 35832 00:19:03.375 Host Write Commands: 34422 00:19:03.375 Controller Busy Time: 0 minutes 00:19:03.375 Power Cycles: 0 00:19:03.375 Power On Hours: 0 hours 00:19:03.375 Unsafe Shutdowns: 0 00:19:03.375 Unrecoverable Media Errors: 0 00:19:03.375 Lifetime Error Log Entries: 0 00:19:03.375 Warning Temperature Time: 0 minutes 00:19:03.375 Critical Temperature Time: 0 minutes 00:19:03.375 00:19:03.375 Number of Queues 00:19:03.375 ================ 00:19:03.375 Number of I/O Submission Queues: 64 00:19:03.375 Number of I/O Completion Queues: 64 00:19:03.375 00:19:03.375 ZNS Specific Controller Data 00:19:03.375 ============================ 00:19:03.375 Zone Append Size Limit: 0 00:19:03.375 00:19:03.375 00:19:03.375 Active Namespaces 00:19:03.375 ================= 00:19:03.375 Namespace ID:1 00:19:03.375 Error Recovery Timeout: Unlimited 00:19:03.375 Command Set Identifier: NVM (00h) 00:19:03.375 Deallocate: Supported 00:19:03.375 Deallocated/Unwritten Error: Supported 00:19:03.375 Deallocated Read Value: All 0x00 00:19:03.375 Deallocate in Write Zeroes: Not Supported 00:19:03.375 Deallocated Guard Field: 0xFFFF 00:19:03.375 Flush: Supported 00:19:03.375 Reservation: Not Supported 00:19:03.375 Namespace Sharing Capabilities: Multiple Controllers 00:19:03.375 Size (in LBAs): 262144 (1GiB) 00:19:03.375 Capacity (in LBAs): 262144 (1GiB) 00:19:03.375 Utilization (in LBAs): 262144 (1GiB) 00:19:03.375 Thin Provisioning: Not Supported 00:19:03.375 Per-NS Atomic Units: No 00:19:03.375 Maximum Single Source Range Length: 128 00:19:03.375 Maximum Copy Length: 128 00:19:03.375 Maximum Source Range Count: 128 00:19:03.375 NGUID/EUI64 Never Reused: No 00:19:03.375 Namespace Write Protected: No 00:19:03.375 Endurance group ID: 1 00:19:03.375 Number of LBA Formats: 8 00:19:03.375 Current LBA Format: LBA Format #04 00:19:03.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.375 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.375 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.375 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.375 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.375 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.375 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.375 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.375 00:19:03.375 Get Feature FDP: 00:19:03.375 ================ 00:19:03.375 Enabled: Yes 00:19:03.375 FDP configuration index: 0 00:19:03.375 00:19:03.375 FDP configurations log page 00:19:03.375 =========================== 00:19:03.375 Number of FDP configurations: 1 00:19:03.375 Version: 0 00:19:03.375 Size: 112 00:19:03.375 FDP Configuration Descriptor: 0 00:19:03.375 Descriptor Size: 96 00:19:03.375 Reclaim Group Identifier format: 2 00:19:03.375 FDP Volatile Write Cache: Not Present 00:19:03.375 FDP Configuration: Valid 00:19:03.375 Vendor Specific Size: 0 00:19:03.375 Number of Reclaim Groups: 2 00:19:03.375 Number of Reclaim Unit Handles: 8 00:19:03.375 Max Placement Identifiers: 128 00:19:03.375 Number of Namespaces Supported: 256 00:19:03.375 Reclaim unit Nominal Size: 6000000 bytes 00:19:03.375 Estimated Reclaim Unit Time Limit: Not Reported 00:19:03.375 RUH Desc #000: RUH Type: Initially Isolated 00:19:03.375 RUH Desc #001: RUH Type: Initially Isolated 00:19:03.375 RUH Desc #002: RUH Type: Initially Isolated 00:19:03.375 RUH Desc #003: RUH Type: Initially Isolated 00:19:03.375 RUH Desc #004: RUH Type:
Initially Isolated 00:19:03.375 RUH Desc #005: RUH Type: Initially Isolated 00:19:03.375 RUH Desc #006: RUH Type: Initially Isolated 00:19:03.375 RUH Desc #007: RUH Type: Initially Isolated 00:19:03.375 00:19:03.375 FDP reclaim unit handle usage log page 00:19:03.375 ====================================== 00:19:03.375 Number of Reclaim Unit Handles: 8 00:19:03.375 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:03.375 RUH Usage Desc #001: RUH Attributes: Unused 00:19:03.375 RUH Usage Desc #002: RUH Attributes: Unused 00:19:03.375 RUH Usage Desc #003: RUH Attributes: Unused 00:19:03.375 RUH Usage Desc #004: RUH Attributes: Unused 00:19:03.375 RUH Usage Desc #005: RUH Attributes: Unused 00:19:03.375 RUH Usage Desc #006: RUH Attributes: Unused 00:19:03.375 RUH Usage Desc #007: RUH Attributes: Unused 00:19:03.375 00:19:03.375 FDP statistics log page 00:19:03.375 ======================= 00:19:03.375 Host bytes with metadata written: 428580864 00:19:03.375 [2024-07-25 11:48:00.259252] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68299 terminated unexpected 00:19:03.375 Media bytes with metadata written: 428625920 00:19:03.375 Media bytes erased: 0 00:19:03.375 00:19:03.375 FDP events log page 00:19:03.375 =================== 00:19:03.375 Number of FDP events: 0 00:19:03.375 00:19:03.375 NVM Specific Namespace Data 00:19:03.375 =========================== 00:19:03.375 Logical Block Storage Tag Mask: 0 00:19:03.375 Protection Information Capabilities: 00:19:03.375 16b Guard Protection Information Storage Tag Support: No 00:19:03.375 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.375 Storage Tag Check Read Support: No 00:19:03.375 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.375 ===================================================== 00:19:03.375 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:03.375 ===================================================== 00:19:03.375 Controller Capabilities/Features 00:19:03.375 ================================ 00:19:03.375 Vendor ID: 1b36 00:19:03.375 Subsystem Vendor ID: 1af4 00:19:03.375 Serial Number: 12342 00:19:03.375 Model Number: QEMU NVMe Ctrl 00:19:03.375 Firmware Version: 8.0.0 00:19:03.375 Recommended Arb Burst: 6 00:19:03.375 IEEE OUI Identifier: 00 54 52 00:19:03.375 Multi-path I/O 00:19:03.375 May have multiple subsystem ports: No 00:19:03.375 May have multiple controllers: No 00:19:03.375 Associated with SR-IOV VF: No 00:19:03.375 Max Data Transfer Size: 524288 00:19:03.375 Max Number of Namespaces: 256 00:19:03.375 Max Number of I/O Queues: 64 00:19:03.375 NVMe Specification Version (VS): 1.4 00:19:03.375 NVMe Specification Version (Identify): 1.4 00:19:03.375 Maximum Queue Entries: 2048
00:19:03.375 Contiguous Queues Required: Yes 00:19:03.375 Arbitration Mechanisms Supported 00:19:03.375 Weighted Round Robin: Not Supported 00:19:03.375 Vendor Specific: Not Supported 00:19:03.375 Reset Timeout: 7500 ms 00:19:03.375 Doorbell Stride: 4 bytes 00:19:03.376 NVM Subsystem Reset: Not Supported 00:19:03.376 Command Sets Supported 00:19:03.376 NVM Command Set: Supported 00:19:03.376 Boot Partition: Not Supported 00:19:03.376 Memory Page Size Minimum: 4096 bytes 00:19:03.376 Memory Page Size Maximum: 65536 bytes 00:19:03.376 Persistent Memory Region: Not Supported 00:19:03.376 Optional Asynchronous Events Supported 00:19:03.376 Namespace Attribute Notices: Supported 00:19:03.376 Firmware Activation Notices: Not Supported 00:19:03.376 ANA Change Notices: Not Supported 00:19:03.376 PLE Aggregate Log Change Notices: Not Supported 00:19:03.376 LBA Status Info Alert Notices: Not Supported 00:19:03.376 EGE Aggregate Log Change Notices: Not Supported 00:19:03.376 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.376 Zone Descriptor Change Notices: Not Supported 00:19:03.376 Discovery Log Change Notices: Not Supported 00:19:03.376 Controller Attributes 00:19:03.376 128-bit Host Identifier: Not Supported 00:19:03.376 Non-Operational Permissive Mode: Not Supported 00:19:03.376 NVM Sets: Not Supported 00:19:03.376 Read Recovery Levels: Not Supported 00:19:03.376 Endurance Groups: Not Supported 00:19:03.376 Predictable Latency Mode: Not Supported 00:19:03.376 Traffic Based Keep ALive: Not Supported 00:19:03.376 Namespace Granularity: Not Supported 00:19:03.376 SQ Associations: Not Supported 00:19:03.376 UUID List: Not Supported 00:19:03.376 Multi-Domain Subsystem: Not Supported 00:19:03.376 Fixed Capacity Management: Not Supported 00:19:03.376 Variable Capacity Management: Not Supported 00:19:03.376 Delete Endurance Group: Not Supported 00:19:03.376 Delete NVM Set: Not Supported 00:19:03.376 Extended LBA Formats Supported: Supported 00:19:03.376 Flexible Data Placement Supported: Not Supported 00:19:03.376 00:19:03.376 Controller Memory Buffer Support 00:19:03.376 ================================ 00:19:03.376 Supported: No 00:19:03.376 00:19:03.376 Persistent Memory Region Support 00:19:03.376 ================================ 00:19:03.376 Supported: No 00:19:03.376 00:19:03.376 Admin Command Set Attributes 00:19:03.376 ============================ 00:19:03.376 Security Send/Receive: Not Supported 00:19:03.376 Format NVM: Supported 00:19:03.376 Firmware Activate/Download: Not Supported 00:19:03.376 Namespace Management: Supported 00:19:03.376 Device Self-Test: Not Supported 00:19:03.376 Directives: Supported 00:19:03.376 NVMe-MI: Not Supported 00:19:03.376 Virtualization Management: Not Supported 00:19:03.376 Doorbell Buffer Config: Supported 00:19:03.376 Get LBA Status Capability: Not Supported 00:19:03.376 Command & Feature Lockdown Capability: Not Supported 00:19:03.376 Abort Command Limit: 4 00:19:03.376 Async Event Request Limit: 4 00:19:03.376 Number of Firmware Slots: N/A 00:19:03.376 Firmware Slot 1 Read-Only: N/A 00:19:03.376 Firmware Activation Without Reset: N/A 00:19:03.376 Multiple Update Detection Support: N/A 00:19:03.376 Firmware Update Granularity: No Information Provided 00:19:03.376 Per-Namespace SMART Log: Yes 00:19:03.376 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.376 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:19:03.376 Command Effects Log Page: Supported 00:19:03.376 Get Log Page Extended Data: Supported 00:19:03.376 Telemetry Log Pages: Not 
Supported 00:19:03.376 Persistent Event Log Pages: Not Supported 00:19:03.376 Supported Log Pages Log Page: May Support 00:19:03.376 Commands Supported & Effects Log Page: Not Supported 00:19:03.376 Feature Identifiers & Effects Log Page:May Support 00:19:03.376 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.376 Data Area 4 for Telemetry Log: Not Supported 00:19:03.376 Error Log Page Entries Supported: 1 00:19:03.376 Keep Alive: Not Supported 00:19:03.376 00:19:03.376 NVM Command Set Attributes 00:19:03.376 ========================== 00:19:03.376 Submission Queue Entry Size 00:19:03.376 Max: 64 00:19:03.376 Min: 64 00:19:03.376 Completion Queue Entry Size 00:19:03.376 Max: 16 00:19:03.376 Min: 16 00:19:03.376 Number of Namespaces: 256 00:19:03.376 Compare Command: Supported 00:19:03.376 Write Uncorrectable Command: Not Supported 00:19:03.376 Dataset Management Command: Supported 00:19:03.376 Write Zeroes Command: Supported 00:19:03.376 Set Features Save Field: Supported 00:19:03.376 Reservations: Not Supported 00:19:03.376 Timestamp: Supported 00:19:03.376 Copy: Supported 00:19:03.376 Volatile Write Cache: Present 00:19:03.376 Atomic Write Unit (Normal): 1 00:19:03.376 Atomic Write Unit (PFail): 1 00:19:03.376 Atomic Compare & Write Unit: 1 00:19:03.376 Fused Compare & Write: Not Supported 00:19:03.376 Scatter-Gather List 00:19:03.376 SGL Command Set: Supported 00:19:03.376 SGL Keyed: Not Supported 00:19:03.376 SGL Bit Bucket Descriptor: Not Supported 00:19:03.376 SGL Metadata Pointer: Not Supported 00:19:03.376 Oversized SGL: Not Supported 00:19:03.376 SGL Metadata Address: Not Supported 00:19:03.376 SGL Offset: Not Supported 00:19:03.376 Transport SGL Data Block: Not Supported 00:19:03.376 Replay Protected Memory Block: Not Supported 00:19:03.376 00:19:03.376 Firmware Slot Information 00:19:03.376 ========================= 00:19:03.376 Active slot: 1 00:19:03.376 Slot 1 Firmware Revision: 1.0 00:19:03.376 00:19:03.376 00:19:03.376 Commands Supported and Effects 00:19:03.376 ============================== 00:19:03.376 Admin Commands 00:19:03.376 -------------- 00:19:03.376 Delete I/O Submission Queue (00h): Supported 00:19:03.376 Create I/O Submission Queue (01h): Supported 00:19:03.376 Get Log Page (02h): Supported 00:19:03.376 Delete I/O Completion Queue (04h): Supported 00:19:03.376 Create I/O Completion Queue (05h): Supported 00:19:03.376 Identify (06h): Supported 00:19:03.376 Abort (08h): Supported 00:19:03.376 Set Features (09h): Supported 00:19:03.376 Get Features (0Ah): Supported 00:19:03.376 Asynchronous Event Request (0Ch): Supported 00:19:03.376 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.376 Directive Send (19h): Supported 00:19:03.376 Directive Receive (1Ah): Supported 00:19:03.376 Virtualization Management (1Ch): Supported 00:19:03.376 Doorbell Buffer Config (7Ch): Supported 00:19:03.376 Format NVM (80h): Supported LBA-Change 00:19:03.376 I/O Commands 00:19:03.376 ------------ 00:19:03.376 Flush (00h): Supported LBA-Change 00:19:03.376 Write (01h): Supported LBA-Change 00:19:03.376 Read (02h): Supported 00:19:03.376 Compare (05h): Supported 00:19:03.376 Write Zeroes (08h): Supported LBA-Change 00:19:03.376 Dataset Management (09h): Supported LBA-Change 00:19:03.376 Unknown (0Ch): Supported 00:19:03.376 Unknown (12h): Supported 00:19:03.376 Copy (19h): Supported LBA-Change 00:19:03.376 Unknown (1Dh): Supported LBA-Change 00:19:03.376 00:19:03.376 Error Log 00:19:03.376 ========= 00:19:03.376 00:19:03.376 Arbitration 00:19:03.376 =========== 
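The Commands Supported and Effects dump above marks each opcode that may alter LBA contents with the LBA-Change effect. If the identify output is captured to a file, those opcodes can be filtered out in one line; the filename here is a hypothetical for illustration, not something this job writes:

  # Opcodes flagged as LBA-changing in a saved identify dump (filename illustrative)
  grep 'Supported LBA-Change' identify-12342.txt
  # per the log above this matches Format NVM (80h), Flush (00h), Write (01h),
  # Write Zeroes (08h), Dataset Management (09h), Copy (19h) and Unknown (1Dh)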
00:19:03.376 Arbitration Burst: no limit 00:19:03.376 00:19:03.376 Power Management 00:19:03.376 ================ 00:19:03.376 Number of Power States: 1 00:19:03.376 Current Power State: Power State #0 00:19:03.376 Power State #0: 00:19:03.376 Max Power: 25.00 W 00:19:03.376 Non-Operational State: Operational 00:19:03.376 Entry Latency: 16 microseconds 00:19:03.376 Exit Latency: 4 microseconds 00:19:03.376 Relative Read Throughput: 0 00:19:03.376 Relative Read Latency: 0 00:19:03.376 Relative Write Throughput: 0 00:19:03.376 Relative Write Latency: 0 00:19:03.376 Idle Power: Not Reported 00:19:03.376 Active Power: Not Reported 00:19:03.377 Non-Operational Permissive Mode: Not Supported 00:19:03.377 00:19:03.377 Health Information 00:19:03.377 ================== 00:19:03.377 Critical Warnings: 00:19:03.377 Available Spare Space: OK 00:19:03.377 Temperature: OK 00:19:03.377 Device Reliability: OK 00:19:03.377 Read Only: No 00:19:03.377 Volatile Memory Backup: OK 00:19:03.377 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.377 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.377 Available Spare: 0% 00:19:03.377 Available Spare Threshold: 0% 00:19:03.377 Life Percentage Used: 0% 00:19:03.377 Data Units Read: 2183 00:19:03.377 Data Units Written: 1863 00:19:03.377 Host Read Commands: 105780 00:19:03.377 Host Write Commands: 101550 00:19:03.377 Controller Busy Time: 0 minutes 00:19:03.377 Power Cycles: 0 00:19:03.377 Power On Hours: 0 hours 00:19:03.377 Unsafe Shutdowns: 0 00:19:03.377 Unrecoverable Media Errors: 0 00:19:03.377 Lifetime Error Log Entries: 0 00:19:03.377 Warning Temperature Time: 0 minutes 00:19:03.377 Critical Temperature Time: 0 minutes 00:19:03.377 00:19:03.377 Number of Queues 00:19:03.377 ================ 00:19:03.377 Number of I/O Submission Queues: 64 00:19:03.377 Number of I/O Completion Queues: 64 00:19:03.377 00:19:03.377 ZNS Specific Controller Data 00:19:03.377 ============================ 00:19:03.377 Zone Append Size Limit: 0 00:19:03.377 00:19:03.377 00:19:03.377 Active Namespaces 00:19:03.377 ================= 00:19:03.377 Namespace ID:1 00:19:03.377 Error Recovery Timeout: Unlimited 00:19:03.377 Command Set Identifier: NVM (00h) 00:19:03.377 Deallocate: Supported 00:19:03.377 Deallocated/Unwritten Error: Supported 00:19:03.377 Deallocated Read Value: All 0x00 00:19:03.377 Deallocate in Write Zeroes: Not Supported 00:19:03.377 Deallocated Guard Field: 0xFFFF 00:19:03.377 Flush: Supported 00:19:03.377 Reservation: Not Supported 00:19:03.377 Namespace Sharing Capabilities: Private 00:19:03.377 Size (in LBAs): 1048576 (4GiB) 00:19:03.377 Capacity (in LBAs): 1048576 (4GiB) 00:19:03.377 Utilization (in LBAs): 1048576 (4GiB) 00:19:03.377 Thin Provisioning: Not Supported 00:19:03.377 Per-NS Atomic Units: No 00:19:03.377 Maximum Single Source Range Length: 128 00:19:03.377 Maximum Copy Length: 128 00:19:03.377 Maximum Source Range Count: 128 00:19:03.377 NGUID/EUI64 Never Reused: No 00:19:03.377 Namespace Write Protected: No 00:19:03.377 Number of LBA Formats: 8 00:19:03.377 Current LBA Format: LBA Format #04 00:19:03.377 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.377 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.377 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.377 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.377 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.377 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.377 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.377 LBA Format 
#07: Data Size: 4096 Metadata Size: 64 00:19:03.377 00:19:03.377 NVM Specific Namespace Data 00:19:03.377 =========================== 00:19:03.377 Logical Block Storage Tag Mask: 0 00:19:03.377 Protection Information Capabilities: 00:19:03.377 16b Guard Protection Information Storage Tag Support: No 00:19:03.377 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.377 Storage Tag Check Read Support: No 00:19:03.377 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Namespace ID:2 00:19:03.377 Error Recovery Timeout: Unlimited 00:19:03.377 Command Set Identifier: NVM (00h) 00:19:03.377 Deallocate: Supported 00:19:03.377 Deallocated/Unwritten Error: Supported 00:19:03.377 Deallocated Read Value: All 0x00 00:19:03.377 Deallocate in Write Zeroes: Not Supported 00:19:03.377 Deallocated Guard Field: 0xFFFF 00:19:03.377 Flush: Supported 00:19:03.377 Reservation: Not Supported 00:19:03.377 Namespace Sharing Capabilities: Private 00:19:03.377 Size (in LBAs): 1048576 (4GiB) 00:19:03.377 Capacity (in LBAs): 1048576 (4GiB) 00:19:03.377 Utilization (in LBAs): 1048576 (4GiB) 00:19:03.377 Thin Provisioning: Not Supported 00:19:03.377 Per-NS Atomic Units: No 00:19:03.377 Maximum Single Source Range Length: 128 00:19:03.377 Maximum Copy Length: 128 00:19:03.377 Maximum Source Range Count: 128 00:19:03.377 NGUID/EUI64 Never Reused: No 00:19:03.377 Namespace Write Protected: No 00:19:03.377 Number of LBA Formats: 8 00:19:03.377 Current LBA Format: LBA Format #04 00:19:03.377 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.377 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.377 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.377 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.377 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.377 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.377 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.377 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.377 00:19:03.377 NVM Specific Namespace Data 00:19:03.377 =========================== 00:19:03.377 Logical Block Storage Tag Mask: 0 00:19:03.377 Protection Information Capabilities: 00:19:03.377 16b Guard Protection Information Storage Tag Support: No 00:19:03.377 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.377 Storage Tag Check Read Support: No 00:19:03.377 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #03: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Namespace ID:3 00:19:03.377 Error Recovery Timeout: Unlimited 00:19:03.377 Command Set Identifier: NVM (00h) 00:19:03.377 Deallocate: Supported 00:19:03.377 Deallocated/Unwritten Error: Supported 00:19:03.377 Deallocated Read Value: All 0x00 00:19:03.377 Deallocate in Write Zeroes: Not Supported 00:19:03.377 Deallocated Guard Field: 0xFFFF 00:19:03.377 Flush: Supported 00:19:03.377 Reservation: Not Supported 00:19:03.377 Namespace Sharing Capabilities: Private 00:19:03.377 Size (in LBAs): 1048576 (4GiB) 00:19:03.377 Capacity (in LBAs): 1048576 (4GiB) 00:19:03.377 Utilization (in LBAs): 1048576 (4GiB) 00:19:03.377 Thin Provisioning: Not Supported 00:19:03.377 Per-NS Atomic Units: No 00:19:03.377 Maximum Single Source Range Length: 128 00:19:03.377 Maximum Copy Length: 128 00:19:03.377 Maximum Source Range Count: 128 00:19:03.377 NGUID/EUI64 Never Reused: No 00:19:03.377 Namespace Write Protected: No 00:19:03.377 Number of LBA Formats: 8 00:19:03.377 Current LBA Format: LBA Format #04 00:19:03.377 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.377 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.377 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.377 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.377 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.377 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.377 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.377 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.377 00:19:03.377 NVM Specific Namespace Data 00:19:03.377 =========================== 00:19:03.377 Logical Block Storage Tag Mask: 0 00:19:03.377 Protection Information Capabilities: 00:19:03.377 16b Guard Protection Information Storage Tag Support: No 00:19:03.377 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.377 Storage Tag Check Read Support: No 00:19:03.377 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.377 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.378 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.378 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.378 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.378 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.378 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:03.378 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:03.637 ===================================================== 00:19:03.637 NVMe Controller at 
0000:00:10.0 [1b36:0010] 00:19:03.637 ===================================================== 00:19:03.637 Controller Capabilities/Features 00:19:03.637 ================================ 00:19:03.637 Vendor ID: 1b36 00:19:03.637 Subsystem Vendor ID: 1af4 00:19:03.637 Serial Number: 12340 00:19:03.637 Model Number: QEMU NVMe Ctrl 00:19:03.637 Firmware Version: 8.0.0 00:19:03.637 Recommended Arb Burst: 6 00:19:03.637 IEEE OUI Identifier: 00 54 52 00:19:03.637 Multi-path I/O 00:19:03.637 May have multiple subsystem ports: No 00:19:03.637 May have multiple controllers: No 00:19:03.637 Associated with SR-IOV VF: No 00:19:03.637 Max Data Transfer Size: 524288 00:19:03.637 Max Number of Namespaces: 256 00:19:03.637 Max Number of I/O Queues: 64 00:19:03.637 NVMe Specification Version (VS): 1.4 00:19:03.637 NVMe Specification Version (Identify): 1.4 00:19:03.637 Maximum Queue Entries: 2048 00:19:03.637 Contiguous Queues Required: Yes 00:19:03.637 Arbitration Mechanisms Supported 00:19:03.637 Weighted Round Robin: Not Supported 00:19:03.637 Vendor Specific: Not Supported 00:19:03.637 Reset Timeout: 7500 ms 00:19:03.637 Doorbell Stride: 4 bytes 00:19:03.637 NVM Subsystem Reset: Not Supported 00:19:03.637 Command Sets Supported 00:19:03.637 NVM Command Set: Supported 00:19:03.637 Boot Partition: Not Supported 00:19:03.637 Memory Page Size Minimum: 4096 bytes 00:19:03.637 Memory Page Size Maximum: 65536 bytes 00:19:03.637 Persistent Memory Region: Not Supported 00:19:03.637 Optional Asynchronous Events Supported 00:19:03.637 Namespace Attribute Notices: Supported 00:19:03.637 Firmware Activation Notices: Not Supported 00:19:03.637 ANA Change Notices: Not Supported 00:19:03.637 PLE Aggregate Log Change Notices: Not Supported 00:19:03.637 LBA Status Info Alert Notices: Not Supported 00:19:03.637 EGE Aggregate Log Change Notices: Not Supported 00:19:03.637 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.637 Zone Descriptor Change Notices: Not Supported 00:19:03.637 Discovery Log Change Notices: Not Supported 00:19:03.637 Controller Attributes 00:19:03.637 128-bit Host Identifier: Not Supported 00:19:03.637 Non-Operational Permissive Mode: Not Supported 00:19:03.637 NVM Sets: Not Supported 00:19:03.637 Read Recovery Levels: Not Supported 00:19:03.637 Endurance Groups: Not Supported 00:19:03.637 Predictable Latency Mode: Not Supported 00:19:03.637 Traffic Based Keep ALive: Not Supported 00:19:03.637 Namespace Granularity: Not Supported 00:19:03.637 SQ Associations: Not Supported 00:19:03.637 UUID List: Not Supported 00:19:03.637 Multi-Domain Subsystem: Not Supported 00:19:03.637 Fixed Capacity Management: Not Supported 00:19:03.637 Variable Capacity Management: Not Supported 00:19:03.637 Delete Endurance Group: Not Supported 00:19:03.637 Delete NVM Set: Not Supported 00:19:03.637 Extended LBA Formats Supported: Supported 00:19:03.637 Flexible Data Placement Supported: Not Supported 00:19:03.637 00:19:03.637 Controller Memory Buffer Support 00:19:03.637 ================================ 00:19:03.637 Supported: No 00:19:03.637 00:19:03.637 Persistent Memory Region Support 00:19:03.637 ================================ 00:19:03.637 Supported: No 00:19:03.637 00:19:03.637 Admin Command Set Attributes 00:19:03.637 ============================ 00:19:03.637 Security Send/Receive: Not Supported 00:19:03.637 Format NVM: Supported 00:19:03.637 Firmware Activate/Download: Not Supported 00:19:03.637 Namespace Management: Supported 00:19:03.637 Device Self-Test: Not Supported 00:19:03.637 Directives: Supported 
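The nvme.sh@15/nvme.sh@16 trace lines in this log show spdk_nvme_identify being invoked once per PCIe address inside a bash loop. A self-contained sketch of the same pattern, assuming the bdfs array is populated up front (here hard-coded to the four addresses that appear in this log; the real script discovers them elsewhere):

  #!/usr/bin/env bash
  # Re-run the identify pass shown in this log, one controller per BDF
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:$bdf" -i 0
  done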
00:19:03.637 NVMe-MI: Not Supported 00:19:03.637 Virtualization Management: Not Supported 00:19:03.637 Doorbell Buffer Config: Supported 00:19:03.637 Get LBA Status Capability: Not Supported 00:19:03.637 Command & Feature Lockdown Capability: Not Supported 00:19:03.637 Abort Command Limit: 4 00:19:03.637 Async Event Request Limit: 4 00:19:03.637 Number of Firmware Slots: N/A 00:19:03.637 Firmware Slot 1 Read-Only: N/A 00:19:03.637 Firmware Activation Without Reset: N/A 00:19:03.637 Multiple Update Detection Support: N/A 00:19:03.637 Firmware Update Granularity: No Information Provided 00:19:03.637 Per-Namespace SMART Log: Yes 00:19:03.637 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.637 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:03.637 Command Effects Log Page: Supported 00:19:03.637 Get Log Page Extended Data: Supported 00:19:03.637 Telemetry Log Pages: Not Supported 00:19:03.638 Persistent Event Log Pages: Not Supported 00:19:03.638 Supported Log Pages Log Page: May Support 00:19:03.638 Commands Supported & Effects Log Page: Not Supported 00:19:03.638 Feature Identifiers & Effects Log Page:May Support 00:19:03.638 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.638 Data Area 4 for Telemetry Log: Not Supported 00:19:03.638 Error Log Page Entries Supported: 1 00:19:03.638 Keep Alive: Not Supported 00:19:03.638 00:19:03.638 NVM Command Set Attributes 00:19:03.638 ========================== 00:19:03.638 Submission Queue Entry Size 00:19:03.638 Max: 64 00:19:03.638 Min: 64 00:19:03.638 Completion Queue Entry Size 00:19:03.638 Max: 16 00:19:03.638 Min: 16 00:19:03.638 Number of Namespaces: 256 00:19:03.638 Compare Command: Supported 00:19:03.638 Write Uncorrectable Command: Not Supported 00:19:03.638 Dataset Management Command: Supported 00:19:03.638 Write Zeroes Command: Supported 00:19:03.638 Set Features Save Field: Supported 00:19:03.638 Reservations: Not Supported 00:19:03.638 Timestamp: Supported 00:19:03.638 Copy: Supported 00:19:03.638 Volatile Write Cache: Present 00:19:03.638 Atomic Write Unit (Normal): 1 00:19:03.638 Atomic Write Unit (PFail): 1 00:19:03.638 Atomic Compare & Write Unit: 1 00:19:03.638 Fused Compare & Write: Not Supported 00:19:03.638 Scatter-Gather List 00:19:03.638 SGL Command Set: Supported 00:19:03.638 SGL Keyed: Not Supported 00:19:03.638 SGL Bit Bucket Descriptor: Not Supported 00:19:03.638 SGL Metadata Pointer: Not Supported 00:19:03.638 Oversized SGL: Not Supported 00:19:03.638 SGL Metadata Address: Not Supported 00:19:03.638 SGL Offset: Not Supported 00:19:03.638 Transport SGL Data Block: Not Supported 00:19:03.638 Replay Protected Memory Block: Not Supported 00:19:03.638 00:19:03.638 Firmware Slot Information 00:19:03.638 ========================= 00:19:03.638 Active slot: 1 00:19:03.638 Slot 1 Firmware Revision: 1.0 00:19:03.638 00:19:03.638 00:19:03.638 Commands Supported and Effects 00:19:03.638 ============================== 00:19:03.638 Admin Commands 00:19:03.638 -------------- 00:19:03.638 Delete I/O Submission Queue (00h): Supported 00:19:03.638 Create I/O Submission Queue (01h): Supported 00:19:03.638 Get Log Page (02h): Supported 00:19:03.638 Delete I/O Completion Queue (04h): Supported 00:19:03.638 Create I/O Completion Queue (05h): Supported 00:19:03.638 Identify (06h): Supported 00:19:03.638 Abort (08h): Supported 00:19:03.638 Set Features (09h): Supported 00:19:03.638 Get Features (0Ah): Supported 00:19:03.638 Asynchronous Event Request (0Ch): Supported 00:19:03.638 Namespace Attachment (15h): Supported 
NS-Inventory-Change 00:19:03.638 Directive Send (19h): Supported 00:19:03.638 Directive Receive (1Ah): Supported 00:19:03.638 Virtualization Management (1Ch): Supported 00:19:03.638 Doorbell Buffer Config (7Ch): Supported 00:19:03.638 Format NVM (80h): Supported LBA-Change 00:19:03.638 I/O Commands 00:19:03.638 ------------ 00:19:03.638 Flush (00h): Supported LBA-Change 00:19:03.638 Write (01h): Supported LBA-Change 00:19:03.638 Read (02h): Supported 00:19:03.638 Compare (05h): Supported 00:19:03.638 Write Zeroes (08h): Supported LBA-Change 00:19:03.638 Dataset Management (09h): Supported LBA-Change 00:19:03.638 Unknown (0Ch): Supported 00:19:03.638 Unknown (12h): Supported 00:19:03.638 Copy (19h): Supported LBA-Change 00:19:03.638 Unknown (1Dh): Supported LBA-Change 00:19:03.638 00:19:03.638 Error Log 00:19:03.638 ========= 00:19:03.638 00:19:03.638 Arbitration 00:19:03.638 =========== 00:19:03.638 Arbitration Burst: no limit 00:19:03.638 00:19:03.638 Power Management 00:19:03.638 ================ 00:19:03.638 Number of Power States: 1 00:19:03.638 Current Power State: Power State #0 00:19:03.638 Power State #0: 00:19:03.638 Max Power: 25.00 W 00:19:03.638 Non-Operational State: Operational 00:19:03.638 Entry Latency: 16 microseconds 00:19:03.638 Exit Latency: 4 microseconds 00:19:03.638 Relative Read Throughput: 0 00:19:03.638 Relative Read Latency: 0 00:19:03.638 Relative Write Throughput: 0 00:19:03.638 Relative Write Latency: 0 00:19:03.638 Idle Power: Not Reported 00:19:03.638 Active Power: Not Reported 00:19:03.638 Non-Operational Permissive Mode: Not Supported 00:19:03.638 00:19:03.638 Health Information 00:19:03.638 ================== 00:19:03.638 Critical Warnings: 00:19:03.638 Available Spare Space: OK 00:19:03.638 Temperature: OK 00:19:03.638 Device Reliability: OK 00:19:03.638 Read Only: No 00:19:03.638 Volatile Memory Backup: OK 00:19:03.638 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.638 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.638 Available Spare: 0% 00:19:03.638 Available Spare Threshold: 0% 00:19:03.638 Life Percentage Used: 0% 00:19:03.638 Data Units Read: 687 00:19:03.638 Data Units Written: 579 00:19:03.638 Host Read Commands: 34530 00:19:03.638 Host Write Commands: 33568 00:19:03.638 Controller Busy Time: 0 minutes 00:19:03.638 Power Cycles: 0 00:19:03.638 Power On Hours: 0 hours 00:19:03.638 Unsafe Shutdowns: 0 00:19:03.638 Unrecoverable Media Errors: 0 00:19:03.638 Lifetime Error Log Entries: 0 00:19:03.638 Warning Temperature Time: 0 minutes 00:19:03.638 Critical Temperature Time: 0 minutes 00:19:03.638 00:19:03.638 Number of Queues 00:19:03.638 ================ 00:19:03.638 Number of I/O Submission Queues: 64 00:19:03.638 Number of I/O Completion Queues: 64 00:19:03.638 00:19:03.638 ZNS Specific Controller Data 00:19:03.638 ============================ 00:19:03.638 Zone Append Size Limit: 0 00:19:03.638 00:19:03.638 00:19:03.638 Active Namespaces 00:19:03.638 ================= 00:19:03.638 Namespace ID:1 00:19:03.638 Error Recovery Timeout: Unlimited 00:19:03.638 Command Set Identifier: NVM (00h) 00:19:03.638 Deallocate: Supported 00:19:03.638 Deallocated/Unwritten Error: Supported 00:19:03.638 Deallocated Read Value: All 0x00 00:19:03.638 Deallocate in Write Zeroes: Not Supported 00:19:03.638 Deallocated Guard Field: 0xFFFF 00:19:03.638 Flush: Supported 00:19:03.638 Reservation: Not Supported 00:19:03.638 Metadata Transferred as: Separate Metadata Buffer 00:19:03.638 Namespace Sharing Capabilities: Private 00:19:03.638 Size (in LBAs): 
1548666 (5GiB) 00:19:03.638 Capacity (in LBAs): 1548666 (5GiB) 00:19:03.638 Utilization (in LBAs): 1548666 (5GiB) 00:19:03.638 Thin Provisioning: Not Supported 00:19:03.638 Per-NS Atomic Units: No 00:19:03.638 Maximum Single Source Range Length: 128 00:19:03.638 Maximum Copy Length: 128 00:19:03.638 Maximum Source Range Count: 128 00:19:03.638 NGUID/EUI64 Never Reused: No 00:19:03.638 Namespace Write Protected: No 00:19:03.638 Number of LBA Formats: 8 00:19:03.638 Current LBA Format: LBA Format #07 00:19:03.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.638 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.638 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.638 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.638 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.638 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.638 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.638 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.638 00:19:03.638 NVM Specific Namespace Data 00:19:03.638 =========================== 00:19:03.638 Logical Block Storage Tag Mask: 0 00:19:03.638 Protection Information Capabilities: 00:19:03.638 16b Guard Protection Information Storage Tag Support: No 00:19:03.638 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.638 Storage Tag Check Read Support: No 00:19:03.638 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.638 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:03.639 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:19:03.898 ===================================================== 00:19:03.898 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:03.898 ===================================================== 00:19:03.898 Controller Capabilities/Features 00:19:03.898 ================================ 00:19:03.898 Vendor ID: 1b36 00:19:03.898 Subsystem Vendor ID: 1af4 00:19:03.898 Serial Number: 12341 00:19:03.898 Model Number: QEMU NVMe Ctrl 00:19:03.898 Firmware Version: 8.0.0 00:19:03.898 Recommended Arb Burst: 6 00:19:03.898 IEEE OUI Identifier: 00 54 52 00:19:03.898 Multi-path I/O 00:19:03.898 May have multiple subsystem ports: No 00:19:03.898 May have multiple controllers: No 00:19:03.898 Associated with SR-IOV VF: No 00:19:03.898 Max Data Transfer Size: 524288 00:19:03.898 Max Number of Namespaces: 256 00:19:03.898 Max Number of I/O Queues: 64 00:19:03.898 NVMe Specification Version (VS): 1.4 00:19:03.898 NVMe Specification Version (Identify): 1.4 00:19:03.898 Maximum Queue Entries: 2048 00:19:03.898 Contiguous Queues Required: Yes 00:19:03.898 Arbitration 
Mechanisms Supported 00:19:03.898 Weighted Round Robin: Not Supported 00:19:03.898 Vendor Specific: Not Supported 00:19:03.898 Reset Timeout: 7500 ms 00:19:03.898 Doorbell Stride: 4 bytes 00:19:03.898 NVM Subsystem Reset: Not Supported 00:19:03.898 Command Sets Supported 00:19:03.898 NVM Command Set: Supported 00:19:03.898 Boot Partition: Not Supported 00:19:03.898 Memory Page Size Minimum: 4096 bytes 00:19:03.898 Memory Page Size Maximum: 65536 bytes 00:19:03.898 Persistent Memory Region: Not Supported 00:19:03.898 Optional Asynchronous Events Supported 00:19:03.898 Namespace Attribute Notices: Supported 00:19:03.898 Firmware Activation Notices: Not Supported 00:19:03.898 ANA Change Notices: Not Supported 00:19:03.898 PLE Aggregate Log Change Notices: Not Supported 00:19:03.898 LBA Status Info Alert Notices: Not Supported 00:19:03.898 EGE Aggregate Log Change Notices: Not Supported 00:19:03.898 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.898 Zone Descriptor Change Notices: Not Supported 00:19:03.898 Discovery Log Change Notices: Not Supported 00:19:03.898 Controller Attributes 00:19:03.898 128-bit Host Identifier: Not Supported 00:19:03.898 Non-Operational Permissive Mode: Not Supported 00:19:03.898 NVM Sets: Not Supported 00:19:03.898 Read Recovery Levels: Not Supported 00:19:03.898 Endurance Groups: Not Supported 00:19:03.898 Predictable Latency Mode: Not Supported 00:19:03.898 Traffic Based Keep ALive: Not Supported 00:19:03.898 Namespace Granularity: Not Supported 00:19:03.898 SQ Associations: Not Supported 00:19:03.898 UUID List: Not Supported 00:19:03.898 Multi-Domain Subsystem: Not Supported 00:19:03.898 Fixed Capacity Management: Not Supported 00:19:03.898 Variable Capacity Management: Not Supported 00:19:03.898 Delete Endurance Group: Not Supported 00:19:03.898 Delete NVM Set: Not Supported 00:19:03.898 Extended LBA Formats Supported: Supported 00:19:03.898 Flexible Data Placement Supported: Not Supported 00:19:03.898 00:19:03.898 Controller Memory Buffer Support 00:19:03.898 ================================ 00:19:03.898 Supported: No 00:19:03.898 00:19:03.898 Persistent Memory Region Support 00:19:03.898 ================================ 00:19:03.898 Supported: No 00:19:03.898 00:19:03.898 Admin Command Set Attributes 00:19:03.898 ============================ 00:19:03.898 Security Send/Receive: Not Supported 00:19:03.898 Format NVM: Supported 00:19:03.898 Firmware Activate/Download: Not Supported 00:19:03.898 Namespace Management: Supported 00:19:03.898 Device Self-Test: Not Supported 00:19:03.898 Directives: Supported 00:19:03.898 NVMe-MI: Not Supported 00:19:03.898 Virtualization Management: Not Supported 00:19:03.898 Doorbell Buffer Config: Supported 00:19:03.898 Get LBA Status Capability: Not Supported 00:19:03.898 Command & Feature Lockdown Capability: Not Supported 00:19:03.898 Abort Command Limit: 4 00:19:03.898 Async Event Request Limit: 4 00:19:03.898 Number of Firmware Slots: N/A 00:19:03.898 Firmware Slot 1 Read-Only: N/A 00:19:03.898 Firmware Activation Without Reset: N/A 00:19:03.898 Multiple Update Detection Support: N/A 00:19:03.898 Firmware Update Granularity: No Information Provided 00:19:03.898 Per-Namespace SMART Log: Yes 00:19:03.898 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.899 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:19:03.899 Command Effects Log Page: Supported 00:19:03.899 Get Log Page Extended Data: Supported 00:19:03.899 Telemetry Log Pages: Not Supported 00:19:03.899 Persistent Event Log Pages: Not Supported 
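When a single field is all that matters, the identify output is regular enough to grep directly. A hypothetical post-processing step for the controller at 0000:00:11.0 shown here (both the capture filename and the grep pattern are assumptions for illustration):

  # Capture the identify dump, then pull out one field
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 > identify-11.txt
  grep -m1 'Serial Number:' identify-11.txt   # -> Serial Number: 12341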
00:19:03.899 Supported Log Pages Log Page: May Support 00:19:03.899 Commands Supported & Effects Log Page: Not Supported 00:19:03.899 Feature Identifiers & Effects Log Page:May Support 00:19:03.899 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.899 Data Area 4 for Telemetry Log: Not Supported 00:19:03.899 Error Log Page Entries Supported: 1 00:19:03.899 Keep Alive: Not Supported 00:19:03.899 00:19:03.899 NVM Command Set Attributes 00:19:03.899 ========================== 00:19:03.899 Submission Queue Entry Size 00:19:03.899 Max: 64 00:19:03.899 Min: 64 00:19:03.899 Completion Queue Entry Size 00:19:03.899 Max: 16 00:19:03.899 Min: 16 00:19:03.899 Number of Namespaces: 256 00:19:03.899 Compare Command: Supported 00:19:03.899 Write Uncorrectable Command: Not Supported 00:19:03.899 Dataset Management Command: Supported 00:19:03.899 Write Zeroes Command: Supported 00:19:03.899 Set Features Save Field: Supported 00:19:03.899 Reservations: Not Supported 00:19:03.899 Timestamp: Supported 00:19:03.899 Copy: Supported 00:19:03.899 Volatile Write Cache: Present 00:19:03.899 Atomic Write Unit (Normal): 1 00:19:03.899 Atomic Write Unit (PFail): 1 00:19:03.899 Atomic Compare & Write Unit: 1 00:19:03.899 Fused Compare & Write: Not Supported 00:19:03.899 Scatter-Gather List 00:19:03.899 SGL Command Set: Supported 00:19:03.899 SGL Keyed: Not Supported 00:19:03.899 SGL Bit Bucket Descriptor: Not Supported 00:19:03.899 SGL Metadata Pointer: Not Supported 00:19:03.899 Oversized SGL: Not Supported 00:19:03.899 SGL Metadata Address: Not Supported 00:19:03.899 SGL Offset: Not Supported 00:19:03.899 Transport SGL Data Block: Not Supported 00:19:03.899 Replay Protected Memory Block: Not Supported 00:19:03.899 00:19:03.899 Firmware Slot Information 00:19:03.899 ========================= 00:19:03.899 Active slot: 1 00:19:03.899 Slot 1 Firmware Revision: 1.0 00:19:03.899 00:19:03.899 00:19:03.899 Commands Supported and Effects 00:19:03.899 ============================== 00:19:03.899 Admin Commands 00:19:03.899 -------------- 00:19:03.899 Delete I/O Submission Queue (00h): Supported 00:19:03.899 Create I/O Submission Queue (01h): Supported 00:19:03.899 Get Log Page (02h): Supported 00:19:03.899 Delete I/O Completion Queue (04h): Supported 00:19:03.899 Create I/O Completion Queue (05h): Supported 00:19:03.899 Identify (06h): Supported 00:19:03.899 Abort (08h): Supported 00:19:03.899 Set Features (09h): Supported 00:19:03.899 Get Features (0Ah): Supported 00:19:03.899 Asynchronous Event Request (0Ch): Supported 00:19:03.899 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.899 Directive Send (19h): Supported 00:19:03.899 Directive Receive (1Ah): Supported 00:19:03.899 Virtualization Management (1Ch): Supported 00:19:03.899 Doorbell Buffer Config (7Ch): Supported 00:19:03.899 Format NVM (80h): Supported LBA-Change 00:19:03.899 I/O Commands 00:19:03.899 ------------ 00:19:03.899 Flush (00h): Supported LBA-Change 00:19:03.899 Write (01h): Supported LBA-Change 00:19:03.899 Read (02h): Supported 00:19:03.899 Compare (05h): Supported 00:19:03.899 Write Zeroes (08h): Supported LBA-Change 00:19:03.899 Dataset Management (09h): Supported LBA-Change 00:19:03.899 Unknown (0Ch): Supported 00:19:03.899 Unknown (12h): Supported 00:19:03.899 Copy (19h): Supported LBA-Change 00:19:03.899 Unknown (1Dh): Supported LBA-Change 00:19:03.899 00:19:03.899 Error Log 00:19:03.899 ========= 00:19:03.899 00:19:03.899 Arbitration 00:19:03.899 =========== 00:19:03.899 Arbitration Burst: no limit 00:19:03.899 00:19:03.899 
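With the NVM command set attributes above (Maximum Queue Entries: 2048, 64-byte submission queue entries, 16-byte completion queue entries), the memory footprint of a maximally sized queue pair is a direct multiplication. A bash sketch of the arithmetic, values copied from the identify output:

  # Memory for one maximally sized queue pair on this controller
  entries=2048   # Maximum Queue Entries
  sqe=64         # Submission Queue Entry Size (Max)
  cqe=16         # Completion Queue Entry Size (Max)
  echo "SQ: $(( entries * sqe )) bytes ($(( entries * sqe / 1024 )) KiB)"
  echo "CQ: $(( entries * cqe )) bytes ($(( entries * cqe / 1024 )) KiB)"
  # -> SQ: 131072 bytes (128 KiB); CQ: 32768 bytes (32 KiB)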
Power Management 00:19:03.899 ================ 00:19:03.899 Number of Power States: 1 00:19:03.899 Current Power State: Power State #0 00:19:03.899 Power State #0: 00:19:03.899 Max Power: 25.00 W 00:19:03.899 Non-Operational State: Operational 00:19:03.899 Entry Latency: 16 microseconds 00:19:03.899 Exit Latency: 4 microseconds 00:19:03.899 Relative Read Throughput: 0 00:19:03.899 Relative Read Latency: 0 00:19:03.899 Relative Write Throughput: 0 00:19:03.899 Relative Write Latency: 0 00:19:03.899 Idle Power: Not Reported 00:19:03.899 Active Power: Not Reported 00:19:03.899 Non-Operational Permissive Mode: Not Supported 00:19:03.899 00:19:03.899 Health Information 00:19:03.899 ================== 00:19:03.899 Critical Warnings: 00:19:03.899 Available Spare Space: OK 00:19:03.899 Temperature: OK 00:19:03.899 Device Reliability: OK 00:19:03.899 Read Only: No 00:19:03.899 Volatile Memory Backup: OK 00:19:03.899 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.899 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.899 Available Spare: 0% 00:19:03.899 Available Spare Threshold: 0% 00:19:03.899 Life Percentage Used: 0% 00:19:03.899 Data Units Read: 1039 00:19:03.899 Data Units Written: 830 00:19:03.899 Host Read Commands: 51433 00:19:03.899 Host Write Commands: 48608 00:19:03.899 Controller Busy Time: 0 minutes 00:19:03.899 Power Cycles: 0 00:19:03.899 Power On Hours: 0 hours 00:19:03.899 Unsafe Shutdowns: 0 00:19:03.899 Unrecoverable Media Errors: 0 00:19:03.899 Lifetime Error Log Entries: 0 00:19:03.899 Warning Temperature Time: 0 minutes 00:19:03.899 Critical Temperature Time: 0 minutes 00:19:03.899 00:19:03.899 Number of Queues 00:19:03.899 ================ 00:19:03.899 Number of I/O Submission Queues: 64 00:19:03.899 Number of I/O Completion Queues: 64 00:19:03.899 00:19:03.899 ZNS Specific Controller Data 00:19:03.899 ============================ 00:19:03.899 Zone Append Size Limit: 0 00:19:03.899 00:19:03.899 00:19:03.899 Active Namespaces 00:19:03.899 ================= 00:19:03.899 Namespace ID:1 00:19:03.899 Error Recovery Timeout: Unlimited 00:19:03.899 Command Set Identifier: NVM (00h) 00:19:03.899 Deallocate: Supported 00:19:03.899 Deallocated/Unwritten Error: Supported 00:19:03.899 Deallocated Read Value: All 0x00 00:19:03.899 Deallocate in Write Zeroes: Not Supported 00:19:03.899 Deallocated Guard Field: 0xFFFF 00:19:03.899 Flush: Supported 00:19:03.899 Reservation: Not Supported 00:19:03.899 Namespace Sharing Capabilities: Private 00:19:03.899 Size (in LBAs): 1310720 (5GiB) 00:19:03.899 Capacity (in LBAs): 1310720 (5GiB) 00:19:03.899 Utilization (in LBAs): 1310720 (5GiB) 00:19:03.899 Thin Provisioning: Not Supported 00:19:03.899 Per-NS Atomic Units: No 00:19:03.899 Maximum Single Source Range Length: 128 00:19:03.899 Maximum Copy Length: 128 00:19:03.899 Maximum Source Range Count: 128 00:19:03.899 NGUID/EUI64 Never Reused: No 00:19:03.899 Namespace Write Protected: No 00:19:03.899 Number of LBA Formats: 8 00:19:03.899 Current LBA Format: LBA Format #04 00:19:03.899 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.899 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.899 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.899 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.899 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.899 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.899 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.899 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.899 00:19:03.899 NVM 
Specific Namespace Data 00:19:03.899 =========================== 00:19:03.899 Logical Block Storage Tag Mask: 0 00:19:03.899 Protection Information Capabilities: 00:19:03.899 16b Guard Protection Information Storage Tag Support: No 00:19:03.899 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.899 Storage Tag Check Read Support: No 00:19:03.899 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.899 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.900 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:03.900 11:48:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:19:04.467 ===================================================== 00:19:04.467 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:04.467 ===================================================== 00:19:04.467 Controller Capabilities/Features 00:19:04.467 ================================ 00:19:04.467 Vendor ID: 1b36 00:19:04.467 Subsystem Vendor ID: 1af4 00:19:04.467 Serial Number: 12342 00:19:04.467 Model Number: QEMU NVMe Ctrl 00:19:04.467 Firmware Version: 8.0.0 00:19:04.467 Recommended Arb Burst: 6 00:19:04.467 IEEE OUI Identifier: 00 54 52 00:19:04.467 Multi-path I/O 00:19:04.467 May have multiple subsystem ports: No 00:19:04.467 May have multiple controllers: No 00:19:04.467 Associated with SR-IOV VF: No 00:19:04.467 Max Data Transfer Size: 524288 00:19:04.467 Max Number of Namespaces: 256 00:19:04.467 Max Number of I/O Queues: 64 00:19:04.467 NVMe Specification Version (VS): 1.4 00:19:04.467 NVMe Specification Version (Identify): 1.4 00:19:04.467 Maximum Queue Entries: 2048 00:19:04.467 Contiguous Queues Required: Yes 00:19:04.467 Arbitration Mechanisms Supported 00:19:04.467 Weighted Round Robin: Not Supported 00:19:04.467 Vendor Specific: Not Supported 00:19:04.467 Reset Timeout: 7500 ms 00:19:04.467 Doorbell Stride: 4 bytes 00:19:04.467 NVM Subsystem Reset: Not Supported 00:19:04.467 Command Sets Supported 00:19:04.467 NVM Command Set: Supported 00:19:04.467 Boot Partition: Not Supported 00:19:04.467 Memory Page Size Minimum: 4096 bytes 00:19:04.467 Memory Page Size Maximum: 65536 bytes 00:19:04.467 Persistent Memory Region: Not Supported 00:19:04.467 Optional Asynchronous Events Supported 00:19:04.467 Namespace Attribute Notices: Supported 00:19:04.467 Firmware Activation Notices: Not Supported 00:19:04.467 ANA Change Notices: Not Supported 00:19:04.467 PLE Aggregate Log Change Notices: Not Supported 00:19:04.467 LBA Status Info Alert Notices: Not Supported 00:19:04.467 EGE Aggregate Log Change Notices: Not Supported 00:19:04.467 Normal NVM Subsystem Shutdown event: Not Supported 00:19:04.467 Zone Descriptor Change Notices: Not Supported 00:19:04.467 
Discovery Log Change Notices: Not Supported 00:19:04.467 Controller Attributes 00:19:04.467 128-bit Host Identifier: Not Supported 00:19:04.467 Non-Operational Permissive Mode: Not Supported 00:19:04.467 NVM Sets: Not Supported 00:19:04.467 Read Recovery Levels: Not Supported 00:19:04.467 Endurance Groups: Not Supported 00:19:04.467 Predictable Latency Mode: Not Supported 00:19:04.467 Traffic Based Keep ALive: Not Supported 00:19:04.467 Namespace Granularity: Not Supported 00:19:04.467 SQ Associations: Not Supported 00:19:04.467 UUID List: Not Supported 00:19:04.467 Multi-Domain Subsystem: Not Supported 00:19:04.467 Fixed Capacity Management: Not Supported 00:19:04.467 Variable Capacity Management: Not Supported 00:19:04.467 Delete Endurance Group: Not Supported 00:19:04.467 Delete NVM Set: Not Supported 00:19:04.467 Extended LBA Formats Supported: Supported 00:19:04.467 Flexible Data Placement Supported: Not Supported 00:19:04.467 00:19:04.467 Controller Memory Buffer Support 00:19:04.467 ================================ 00:19:04.467 Supported: No 00:19:04.467 00:19:04.467 Persistent Memory Region Support 00:19:04.467 ================================ 00:19:04.467 Supported: No 00:19:04.468 00:19:04.468 Admin Command Set Attributes 00:19:04.468 ============================ 00:19:04.468 Security Send/Receive: Not Supported 00:19:04.468 Format NVM: Supported 00:19:04.468 Firmware Activate/Download: Not Supported 00:19:04.468 Namespace Management: Supported 00:19:04.468 Device Self-Test: Not Supported 00:19:04.468 Directives: Supported 00:19:04.468 NVMe-MI: Not Supported 00:19:04.468 Virtualization Management: Not Supported 00:19:04.468 Doorbell Buffer Config: Supported 00:19:04.468 Get LBA Status Capability: Not Supported 00:19:04.468 Command & Feature Lockdown Capability: Not Supported 00:19:04.468 Abort Command Limit: 4 00:19:04.468 Async Event Request Limit: 4 00:19:04.468 Number of Firmware Slots: N/A 00:19:04.468 Firmware Slot 1 Read-Only: N/A 00:19:04.468 Firmware Activation Without Reset: N/A 00:19:04.468 Multiple Update Detection Support: N/A 00:19:04.468 Firmware Update Granularity: No Information Provided 00:19:04.468 Per-Namespace SMART Log: Yes 00:19:04.468 Asymmetric Namespace Access Log Page: Not Supported 00:19:04.468 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:19:04.468 Command Effects Log Page: Supported 00:19:04.468 Get Log Page Extended Data: Supported 00:19:04.468 Telemetry Log Pages: Not Supported 00:19:04.468 Persistent Event Log Pages: Not Supported 00:19:04.468 Supported Log Pages Log Page: May Support 00:19:04.468 Commands Supported & Effects Log Page: Not Supported 00:19:04.468 Feature Identifiers & Effects Log Page:May Support 00:19:04.468 NVMe-MI Commands & Effects Log Page: May Support 00:19:04.468 Data Area 4 for Telemetry Log: Not Supported 00:19:04.468 Error Log Page Entries Supported: 1 00:19:04.468 Keep Alive: Not Supported 00:19:04.468 00:19:04.468 NVM Command Set Attributes 00:19:04.468 ========================== 00:19:04.468 Submission Queue Entry Size 00:19:04.468 Max: 64 00:19:04.468 Min: 64 00:19:04.468 Completion Queue Entry Size 00:19:04.468 Max: 16 00:19:04.468 Min: 16 00:19:04.468 Number of Namespaces: 256 00:19:04.468 Compare Command: Supported 00:19:04.468 Write Uncorrectable Command: Not Supported 00:19:04.468 Dataset Management Command: Supported 00:19:04.468 Write Zeroes Command: Supported 00:19:04.468 Set Features Save Field: Supported 00:19:04.468 Reservations: Not Supported 00:19:04.468 Timestamp: Supported 00:19:04.468 Copy: Supported 
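Every LBA format table in this log pairs a data size with a metadata size; when a namespace is formatted with extended LBAs (metadata interleaved with the data rather than held in a separate buffer), the effective block size is their sum. A bash sketch over the eight formats these controllers report:

  # Extended LBA size = data size + metadata size, per the tables in this log
  data=(512 512 512 512 4096 4096 4096 4096)
  meta=(0 8 16 64 0 8 16 64)
  for i in "${!data[@]}"; do
      printf 'LBA Format #%02d: %4d + %2d = %4d bytes\n' \
          "$i" "${data[i]}" "${meta[i]}" $(( data[i] + meta[i] ))
  done
  # e.g. format #04, the current format on most namespaces here: 4096 + 0 = 4096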
00:19:04.468 Volatile Write Cache: Present 00:19:04.468 Atomic Write Unit (Normal): 1 00:19:04.468 Atomic Write Unit (PFail): 1 00:19:04.468 Atomic Compare & Write Unit: 1 00:19:04.468 Fused Compare & Write: Not Supported 00:19:04.468 Scatter-Gather List 00:19:04.468 SGL Command Set: Supported 00:19:04.468 SGL Keyed: Not Supported 00:19:04.468 SGL Bit Bucket Descriptor: Not Supported 00:19:04.468 SGL Metadata Pointer: Not Supported 00:19:04.468 Oversized SGL: Not Supported 00:19:04.468 SGL Metadata Address: Not Supported 00:19:04.468 SGL Offset: Not Supported 00:19:04.468 Transport SGL Data Block: Not Supported 00:19:04.468 Replay Protected Memory Block: Not Supported 00:19:04.468 00:19:04.468 Firmware Slot Information 00:19:04.468 ========================= 00:19:04.468 Active slot: 1 00:19:04.468 Slot 1 Firmware Revision: 1.0 00:19:04.468 00:19:04.468 00:19:04.468 Commands Supported and Effects 00:19:04.468 ============================== 00:19:04.468 Admin Commands 00:19:04.468 -------------- 00:19:04.468 Delete I/O Submission Queue (00h): Supported 00:19:04.468 Create I/O Submission Queue (01h): Supported 00:19:04.468 Get Log Page (02h): Supported 00:19:04.468 Delete I/O Completion Queue (04h): Supported 00:19:04.468 Create I/O Completion Queue (05h): Supported 00:19:04.468 Identify (06h): Supported 00:19:04.468 Abort (08h): Supported 00:19:04.468 Set Features (09h): Supported 00:19:04.468 Get Features (0Ah): Supported 00:19:04.468 Asynchronous Event Request (0Ch): Supported 00:19:04.468 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:04.468 Directive Send (19h): Supported 00:19:04.468 Directive Receive (1Ah): Supported 00:19:04.468 Virtualization Management (1Ch): Supported 00:19:04.468 Doorbell Buffer Config (7Ch): Supported 00:19:04.468 Format NVM (80h): Supported LBA-Change 00:19:04.468 I/O Commands 00:19:04.468 ------------ 00:19:04.468 Flush (00h): Supported LBA-Change 00:19:04.468 Write (01h): Supported LBA-Change 00:19:04.468 Read (02h): Supported 00:19:04.468 Compare (05h): Supported 00:19:04.468 Write Zeroes (08h): Supported LBA-Change 00:19:04.468 Dataset Management (09h): Supported LBA-Change 00:19:04.468 Unknown (0Ch): Supported 00:19:04.468 Unknown (12h): Supported 00:19:04.468 Copy (19h): Supported LBA-Change 00:19:04.468 Unknown (1Dh): Supported LBA-Change 00:19:04.468 00:19:04.468 Error Log 00:19:04.468 ========= 00:19:04.468 00:19:04.468 Arbitration 00:19:04.468 =========== 00:19:04.468 Arbitration Burst: no limit 00:19:04.468 00:19:04.468 Power Management 00:19:04.468 ================ 00:19:04.468 Number of Power States: 1 00:19:04.468 Current Power State: Power State #0 00:19:04.468 Power State #0: 00:19:04.468 Max Power: 25.00 W 00:19:04.468 Non-Operational State: Operational 00:19:04.468 Entry Latency: 16 microseconds 00:19:04.468 Exit Latency: 4 microseconds 00:19:04.468 Relative Read Throughput: 0 00:19:04.468 Relative Read Latency: 0 00:19:04.468 Relative Write Throughput: 0 00:19:04.468 Relative Write Latency: 0 00:19:04.468 Idle Power: Not Reported 00:19:04.468 Active Power: Not Reported 00:19:04.468 Non-Operational Permissive Mode: Not Supported 00:19:04.468 00:19:04.468 Health Information 00:19:04.468 ================== 00:19:04.468 Critical Warnings: 00:19:04.468 Available Spare Space: OK 00:19:04.468 Temperature: OK 00:19:04.468 Device Reliability: OK 00:19:04.468 Read Only: No 00:19:04.468 Volatile Memory Backup: OK 00:19:04.468 Current Temperature: 323 Kelvin (50 Celsius) 00:19:04.468 Temperature Threshold: 343 Kelvin (70 Celsius) 
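The health information above reports temperatures in kelvin with the Celsius equivalent in parentheses; the printed pairs (323 K / 50 C, 343 K / 70 C) show the tool is applying the integer offset 273. The same conversion in bash:

  # Kelvin -> Celsius as printed in the health log (integer offset 273)
  current_k=323; threshold_k=343
  echo "current temperature:   $(( current_k - 273 )) C"    # -> 50 C
  echo "temperature threshold: $(( threshold_k - 273 )) C"  # -> 70 C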
00:19:04.468 Available Spare: 0% 00:19:04.468 Available Spare Threshold: 0% 00:19:04.468 Life Percentage Used: 0% 00:19:04.468 Data Units Read: 2183 00:19:04.468 Data Units Written: 1863 00:19:04.468 Host Read Commands: 105780 00:19:04.468 Host Write Commands: 101550 00:19:04.468 Controller Busy Time: 0 minutes 00:19:04.468 Power Cycles: 0 00:19:04.468 Power On Hours: 0 hours 00:19:04.468 Unsafe Shutdowns: 0 00:19:04.468 Unrecoverable Media Errors: 0 00:19:04.468 Lifetime Error Log Entries: 0 00:19:04.468 Warning Temperature Time: 0 minutes 00:19:04.468 Critical Temperature Time: 0 minutes 00:19:04.468 00:19:04.468 Number of Queues 00:19:04.468 ================ 00:19:04.468 Number of I/O Submission Queues: 64 00:19:04.468 Number of I/O Completion Queues: 64 00:19:04.468 00:19:04.469 ZNS Specific Controller Data 00:19:04.469 ============================ 00:19:04.469 Zone Append Size Limit: 0 00:19:04.469 00:19:04.469 00:19:04.469 Active Namespaces 00:19:04.469 ================= 00:19:04.469 Namespace ID:1 00:19:04.469 Error Recovery Timeout: Unlimited 00:19:04.469 Command Set Identifier: NVM (00h) 00:19:04.469 Deallocate: Supported 00:19:04.469 Deallocated/Unwritten Error: Supported 00:19:04.469 Deallocated Read Value: All 0x00 00:19:04.469 Deallocate in Write Zeroes: Not Supported 00:19:04.469 Deallocated Guard Field: 0xFFFF 00:19:04.469 Flush: Supported 00:19:04.469 Reservation: Not Supported 00:19:04.469 Namespace Sharing Capabilities: Private 00:19:04.469 Size (in LBAs): 1048576 (4GiB) 00:19:04.469 Capacity (in LBAs): 1048576 (4GiB) 00:19:04.469 Utilization (in LBAs): 1048576 (4GiB) 00:19:04.469 Thin Provisioning: Not Supported 00:19:04.469 Per-NS Atomic Units: No 00:19:04.469 Maximum Single Source Range Length: 128 00:19:04.469 Maximum Copy Length: 128 00:19:04.469 Maximum Source Range Count: 128 00:19:04.469 NGUID/EUI64 Never Reused: No 00:19:04.469 Namespace Write Protected: No 00:19:04.469 Number of LBA Formats: 8 00:19:04.469 Current LBA Format: LBA Format #04 00:19:04.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.469 00:19:04.469 NVM Specific Namespace Data 00:19:04.469 =========================== 00:19:04.469 Logical Block Storage Tag Mask: 0 00:19:04.469 Protection Information Capabilities: 00:19:04.469 16b Guard Protection Information Storage Tag Support: No 00:19:04.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.469 Storage Tag Check Read Support: No 00:19:04.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Namespace ID:2 00:19:04.469 Error Recovery Timeout: Unlimited 00:19:04.469 Command Set Identifier: NVM (00h) 00:19:04.469 Deallocate: Supported 00:19:04.469 Deallocated/Unwritten Error: Supported 00:19:04.469 Deallocated Read Value: All 0x00 00:19:04.469 Deallocate in Write Zeroes: Not Supported 00:19:04.469 Deallocated Guard Field: 0xFFFF 00:19:04.469 Flush: Supported 00:19:04.469 Reservation: Not Supported 00:19:04.469 Namespace Sharing Capabilities: Private 00:19:04.469 Size (in LBAs): 1048576 (4GiB) 00:19:04.469 Capacity (in LBAs): 1048576 (4GiB) 00:19:04.469 Utilization (in LBAs): 1048576 (4GiB) 00:19:04.469 Thin Provisioning: Not Supported 00:19:04.469 Per-NS Atomic Units: No 00:19:04.469 Maximum Single Source Range Length: 128 00:19:04.469 Maximum Copy Length: 128 00:19:04.469 Maximum Source Range Count: 128 00:19:04.469 NGUID/EUI64 Never Reused: No 00:19:04.469 Namespace Write Protected: No 00:19:04.469 Number of LBA Formats: 8 00:19:04.469 Current LBA Format: LBA Format #04 00:19:04.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.469 00:19:04.469 NVM Specific Namespace Data 00:19:04.469 =========================== 00:19:04.469 Logical Block Storage Tag Mask: 0 00:19:04.469 Protection Information Capabilities: 00:19:04.469 16b Guard Protection Information Storage Tag Support: No 00:19:04.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.469 Storage Tag Check Read Support: No 00:19:04.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Namespace ID:3 00:19:04.469 Error Recovery Timeout: Unlimited 00:19:04.469 Command Set Identifier: NVM (00h) 00:19:04.469 Deallocate: Supported 00:19:04.469 Deallocated/Unwritten Error: Supported 00:19:04.469 Deallocated Read Value: All 0x00 00:19:04.469 Deallocate in Write Zeroes: Not Supported 00:19:04.469 Deallocated Guard Field: 0xFFFF 00:19:04.469 Flush: Supported 00:19:04.469 Reservation: Not Supported 00:19:04.469 Namespace Sharing Capabilities: Private 00:19:04.469 Size (in LBAs): 1048576 (4GiB) 00:19:04.469 Capacity (in LBAs): 1048576 (4GiB) 00:19:04.469 Utilization (in LBAs): 1048576 (4GiB) 00:19:04.469 
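Namespace sizes in this log are reported in LBAs with a rounded capacity in parentheses; under the current format #04 (4096-byte data, no metadata) the byte capacity is one multiplication away. A bash sketch using the figures from the namespace above:

  # Byte capacity from LBA count and the current format's block size
  nlbas=1048576   # Size (in LBAs)
  block=4096      # LBA Format #04 data size
  bytes=$(( nlbas * block ))
  echo "$bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"   # -> 4294967296 bytes = 4 GiB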
Thin Provisioning: Not Supported 00:19:04.469 Per-NS Atomic Units: No 00:19:04.469 Maximum Single Source Range Length: 128 00:19:04.469 Maximum Copy Length: 128 00:19:04.469 Maximum Source Range Count: 128 00:19:04.469 NGUID/EUI64 Never Reused: No 00:19:04.469 Namespace Write Protected: No 00:19:04.469 Number of LBA Formats: 8 00:19:04.469 Current LBA Format: LBA Format #04 00:19:04.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.469 00:19:04.469 NVM Specific Namespace Data 00:19:04.469 =========================== 00:19:04.469 Logical Block Storage Tag Mask: 0 00:19:04.469 Protection Information Capabilities: 00:19:04.469 16b Guard Protection Information Storage Tag Support: No 00:19:04.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.469 Storage Tag Check Read Support: No 00:19:04.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.470 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.470 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.470 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.470 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.470 11:48:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:04.470 11:48:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:19:04.735 ===================================================== 00:19:04.735 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:04.735 ===================================================== 00:19:04.735 Controller Capabilities/Features 00:19:04.735 ================================ 00:19:04.735 Vendor ID: 1b36 00:19:04.735 Subsystem Vendor ID: 1af4 00:19:04.735 Serial Number: 12343 00:19:04.735 Model Number: QEMU NVMe Ctrl 00:19:04.735 Firmware Version: 8.0.0 00:19:04.735 Recommended Arb Burst: 6 00:19:04.735 IEEE OUI Identifier: 00 54 52 00:19:04.735 Multi-path I/O 00:19:04.735 May have multiple subsystem ports: No 00:19:04.735 May have multiple controllers: Yes 00:19:04.735 Associated with SR-IOV VF: No 00:19:04.735 Max Data Transfer Size: 524288 00:19:04.735 Max Number of Namespaces: 256 00:19:04.735 Max Number of I/O Queues: 64 00:19:04.735 NVMe Specification Version (VS): 1.4 00:19:04.735 NVMe Specification Version (Identify): 1.4 00:19:04.735 Maximum Queue Entries: 2048 00:19:04.735 Contiguous Queues Required: Yes 00:19:04.735 Arbitration Mechanisms Supported 00:19:04.735 Weighted Round Robin: Not Supported 00:19:04.735 Vendor Specific: Not Supported 00:19:04.735 Reset 
Timeout: 7500 ms 00:19:04.735 Doorbell Stride: 4 bytes 00:19:04.735 NVM Subsystem Reset: Not Supported 00:19:04.735 Command Sets Supported 00:19:04.735 NVM Command Set: Supported 00:19:04.735 Boot Partition: Not Supported 00:19:04.735 Memory Page Size Minimum: 4096 bytes 00:19:04.735 Memory Page Size Maximum: 65536 bytes 00:19:04.735 Persistent Memory Region: Not Supported 00:19:04.735 Optional Asynchronous Events Supported 00:19:04.735 Namespace Attribute Notices: Supported 00:19:04.735 Firmware Activation Notices: Not Supported 00:19:04.735 ANA Change Notices: Not Supported 00:19:04.735 PLE Aggregate Log Change Notices: Not Supported 00:19:04.735 LBA Status Info Alert Notices: Not Supported 00:19:04.735 EGE Aggregate Log Change Notices: Not Supported 00:19:04.735 Normal NVM Subsystem Shutdown event: Not Supported 00:19:04.735 Zone Descriptor Change Notices: Not Supported 00:19:04.735 Discovery Log Change Notices: Not Supported 00:19:04.735 Controller Attributes 00:19:04.735 128-bit Host Identifier: Not Supported 00:19:04.735 Non-Operational Permissive Mode: Not Supported 00:19:04.735 NVM Sets: Not Supported 00:19:04.735 Read Recovery Levels: Not Supported 00:19:04.735 Endurance Groups: Supported 00:19:04.735 Predictable Latency Mode: Not Supported 00:19:04.735 Traffic Based Keep Alive: Not Supported 00:19:04.735 Namespace Granularity: Not Supported 00:19:04.735 SQ Associations: Not Supported 00:19:04.735 UUID List: Not Supported 00:19:04.735 Multi-Domain Subsystem: Not Supported 00:19:04.735 Fixed Capacity Management: Not Supported 00:19:04.735 Variable Capacity Management: Not Supported 00:19:04.735 Delete Endurance Group: Not Supported 00:19:04.735 Delete NVM Set: Not Supported 00:19:04.735 Extended LBA Formats Supported: Supported 00:19:04.735 Flexible Data Placement Supported: Supported 00:19:04.735 00:19:04.735 Controller Memory Buffer Support 00:19:04.735 ================================ 00:19:04.735 Supported: No 00:19:04.735 00:19:04.735 Persistent Memory Region Support 00:19:04.735 ================================ 00:19:04.735 Supported: No 00:19:04.735 00:19:04.735 Admin Command Set Attributes 00:19:04.735 ============================ 00:19:04.735 Security Send/Receive: Not Supported 00:19:04.735 Format NVM: Supported 00:19:04.735 Firmware Activate/Download: Not Supported 00:19:04.735 Namespace Management: Supported 00:19:04.735 Device Self-Test: Not Supported 00:19:04.735 Directives: Supported 00:19:04.735 NVMe-MI: Not Supported 00:19:04.735 Virtualization Management: Not Supported 00:19:04.735 Doorbell Buffer Config: Supported 00:19:04.735 Get LBA Status Capability: Not Supported 00:19:04.735 Command & Feature Lockdown Capability: Not Supported 00:19:04.735 Abort Command Limit: 4 00:19:04.735 Async Event Request Limit: 4 00:19:04.735 Number of Firmware Slots: N/A 00:19:04.735 Firmware Slot 1 Read-Only: N/A 00:19:04.735 Firmware Activation Without Reset: N/A 00:19:04.735 Multiple Update Detection Support: N/A 00:19:04.735 Firmware Update Granularity: No Information Provided 00:19:04.735 Per-Namespace SMART Log: Yes 00:19:04.735 Asymmetric Namespace Access Log Page: Not Supported 00:19:04.735 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:04.735 Command Effects Log Page: Supported 00:19:04.735 Get Log Page Extended Data: Supported 00:19:04.735 Telemetry Log Pages: Not Supported 00:19:04.735 Persistent Event Log Pages: Not Supported 00:19:04.735 Supported Log Pages Log Page: May Support 00:19:04.735 Commands Supported & Effects Log Page: Not Supported 00:19:04.735 Feature
Identifiers & Effects Log Page: May Support 00:19:04.735 NVMe-MI Commands & Effects Log Page: May Support 00:19:04.735 Data Area 4 for Telemetry Log: Not Supported 00:19:04.735 Error Log Page Entries Supported: 1 00:19:04.735 Keep Alive: Not Supported 00:19:04.735 00:19:04.735 NVM Command Set Attributes 00:19:04.735 ========================== 00:19:04.735 Submission Queue Entry Size 00:19:04.735 Max: 64 00:19:04.735 Min: 64 00:19:04.735 Completion Queue Entry Size 00:19:04.735 Max: 16 00:19:04.735 Min: 16 00:19:04.735 Number of Namespaces: 256 00:19:04.735 Compare Command: Supported 00:19:04.735 Write Uncorrectable Command: Not Supported 00:19:04.735 Dataset Management Command: Supported 00:19:04.735 Write Zeroes Command: Supported 00:19:04.735 Set Features Save Field: Supported 00:19:04.735 Reservations: Not Supported 00:19:04.735 Timestamp: Supported 00:19:04.735 Copy: Supported 00:19:04.735 Volatile Write Cache: Present 00:19:04.735 Atomic Write Unit (Normal): 1 00:19:04.735 Atomic Write Unit (PFail): 1 00:19:04.735 Atomic Compare & Write Unit: 1 00:19:04.735 Fused Compare & Write: Not Supported 00:19:04.735 Scatter-Gather List 00:19:04.735 SGL Command Set: Supported 00:19:04.735 SGL Keyed: Not Supported 00:19:04.735 SGL Bit Bucket Descriptor: Not Supported 00:19:04.735 SGL Metadata Pointer: Not Supported 00:19:04.735 Oversized SGL: Not Supported 00:19:04.735 SGL Metadata Address: Not Supported 00:19:04.735 SGL Offset: Not Supported 00:19:04.735 Transport SGL Data Block: Not Supported 00:19:04.735 Replay Protected Memory Block: Not Supported 00:19:04.736 00:19:04.736 Firmware Slot Information 00:19:04.736 ========================= 00:19:04.736 Active slot: 1 00:19:04.736 Slot 1 Firmware Revision: 1.0 00:19:04.736 00:19:04.736 00:19:04.736 Commands Supported and Effects 00:19:04.736 ============================== 00:19:04.736 Admin Commands 00:19:04.736 -------------- 00:19:04.736 Delete I/O Submission Queue (00h): Supported 00:19:04.736 Create I/O Submission Queue (01h): Supported 00:19:04.736 Get Log Page (02h): Supported 00:19:04.736 Delete I/O Completion Queue (04h): Supported 00:19:04.736 Create I/O Completion Queue (05h): Supported 00:19:04.736 Identify (06h): Supported 00:19:04.736 Abort (08h): Supported 00:19:04.736 Set Features (09h): Supported 00:19:04.736 Get Features (0Ah): Supported 00:19:04.736 Asynchronous Event Request (0Ch): Supported 00:19:04.736 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:04.736 Directive Send (19h): Supported 00:19:04.736 Directive Receive (1Ah): Supported 00:19:04.736 Virtualization Management (1Ch): Supported 00:19:04.736 Doorbell Buffer Config (7Ch): Supported 00:19:04.736 Format NVM (80h): Supported LBA-Change 00:19:04.736 I/O Commands 00:19:04.736 ------------ 00:19:04.736 Flush (00h): Supported LBA-Change 00:19:04.736 Write (01h): Supported LBA-Change 00:19:04.736 Read (02h): Supported 00:19:04.736 Compare (05h): Supported 00:19:04.736 Write Zeroes (08h): Supported LBA-Change 00:19:04.736 Dataset Management (09h): Supported LBA-Change 00:19:04.736 Unknown (0Ch): Supported 00:19:04.736 Unknown (12h): Supported 00:19:04.736 Copy (19h): Supported LBA-Change 00:19:04.736 Unknown (1Dh): Supported LBA-Change 00:19:04.736 00:19:04.736 Error Log 00:19:04.736 ========= 00:19:04.736 00:19:04.736 Arbitration 00:19:04.736 =========== 00:19:04.736 Arbitration Burst: no limit 00:19:04.736 00:19:04.736 Power Management 00:19:04.736 ================ 00:19:04.736 Number of Power States: 1 00:19:04.736 Current Power State: Power State #0
00:19:04.736 Power State #0: 00:19:04.736 Max Power: 25.00 W 00:19:04.736 Non-Operational State: Operational 00:19:04.736 Entry Latency: 16 microseconds 00:19:04.736 Exit Latency: 4 microseconds 00:19:04.736 Relative Read Throughput: 0 00:19:04.736 Relative Read Latency: 0 00:19:04.736 Relative Write Throughput: 0 00:19:04.736 Relative Write Latency: 0 00:19:04.736 Idle Power: Not Reported 00:19:04.736 Active Power: Not Reported 00:19:04.736 Non-Operational Permissive Mode: Not Supported 00:19:04.736 00:19:04.736 Health Information 00:19:04.736 ================== 00:19:04.736 Critical Warnings: 00:19:04.736 Available Spare Space: OK 00:19:04.736 Temperature: OK 00:19:04.736 Device Reliability: OK 00:19:04.736 Read Only: No 00:19:04.736 Volatile Memory Backup: OK 00:19:04.736 Current Temperature: 323 Kelvin (50 Celsius) 00:19:04.736 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:04.736 Available Spare: 0% 00:19:04.736 Available Spare Threshold: 0% 00:19:04.736 Life Percentage Used: 0% 00:19:04.736 Data Units Read: 788 00:19:04.736 Data Units Written: 682 00:19:04.736 Host Read Commands: 35832 00:19:04.736 Host Write Commands: 34422 00:19:04.736 Controller Busy Time: 0 minutes 00:19:04.736 Power Cycles: 0 00:19:04.736 Power On Hours: 0 hours 00:19:04.736 Unsafe Shutdowns: 0 00:19:04.736 Unrecoverable Media Errors: 0 00:19:04.736 Lifetime Error Log Entries: 0 00:19:04.736 Warning Temperature Time: 0 minutes 00:19:04.736 Critical Temperature Time: 0 minutes 00:19:04.736 00:19:04.736 Number of Queues 00:19:04.736 ================ 00:19:04.736 Number of I/O Submission Queues: 64 00:19:04.736 Number of I/O Completion Queues: 64 00:19:04.736 00:19:04.736 ZNS Specific Controller Data 00:19:04.736 ============================ 00:19:04.736 Zone Append Size Limit: 0 00:19:04.736 00:19:04.736 00:19:04.736 Active Namespaces 00:19:04.736 ================= 00:19:04.736 Namespace ID:1 00:19:04.736 Error Recovery Timeout: Unlimited 00:19:04.736 Command Set Identifier: NVM (00h) 00:19:04.736 Deallocate: Supported 00:19:04.736 Deallocated/Unwritten Error: Supported 00:19:04.736 Deallocated Read Value: All 0x00 00:19:04.736 Deallocate in Write Zeroes: Not Supported 00:19:04.736 Deallocated Guard Field: 0xFFFF 00:19:04.736 Flush: Supported 00:19:04.736 Reservation: Not Supported 00:19:04.736 Namespace Sharing Capabilities: Multiple Controllers 00:19:04.736 Size (in LBAs): 262144 (1GiB) 00:19:04.736 Capacity (in LBAs): 262144 (1GiB) 00:19:04.736 Utilization (in LBAs): 262144 (1GiB) 00:19:04.736 Thin Provisioning: Not Supported 00:19:04.736 Per-NS Atomic Units: No 00:19:04.736 Maximum Single Source Range Length: 128 00:19:04.736 Maximum Copy Length: 128 00:19:04.736 Maximum Source Range Count: 128 00:19:04.736 NGUID/EUI64 Never Reused: No 00:19:04.736 Namespace Write Protected: No 00:19:04.736 Endurance group ID: 1 00:19:04.736 Number of LBA Formats: 8 00:19:04.736 Current LBA Format: LBA Format #04 00:19:04.736 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.736 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.736 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.736 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.736 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.736 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.736 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.736 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.736 00:19:04.736 Get Feature FDP: 00:19:04.736 ================ 00:19:04.736 Enabled: Yes 00:19:04.736 FDP 
configuration index: 0 00:19:04.736 00:19:04.736 FDP configurations log page 00:19:04.736 =========================== 00:19:04.736 Number of FDP configurations: 1 00:19:04.736 Version: 0 00:19:04.736 Size: 112 00:19:04.736 FDP Configuration Descriptor: 0 00:19:04.736 Descriptor Size: 96 00:19:04.736 Reclaim Group Identifier format: 2 00:19:04.736 FDP Volatile Write Cache: Not Present 00:19:04.736 FDP Configuration: Valid 00:19:04.736 Vendor Specific Size: 0 00:19:04.736 Number of Reclaim Groups: 2 00:19:04.736 Number of Reclaim Unit Handles: 8 00:19:04.736 Max Placement Identifiers: 128 00:19:04.736 Number of Namespaces Supported: 256 00:19:04.736 Reclaim Unit Nominal Size: 6000000 bytes 00:19:04.736 Estimated Reclaim Unit Time Limit: Not Reported 00:19:04.736 RUH Desc #000: RUH Type: Initially Isolated 00:19:04.736 RUH Desc #001: RUH Type: Initially Isolated 00:19:04.736 RUH Desc #002: RUH Type: Initially Isolated 00:19:04.736 RUH Desc #003: RUH Type: Initially Isolated 00:19:04.736 RUH Desc #004: RUH Type: Initially Isolated 00:19:04.736 RUH Desc #005: RUH Type: Initially Isolated 00:19:04.736 RUH Desc #006: RUH Type: Initially Isolated 00:19:04.737 RUH Desc #007: RUH Type: Initially Isolated 00:19:04.737 00:19:04.737 FDP reclaim unit handle usage log page 00:19:04.737 ====================================== 00:19:04.737 Number of Reclaim Unit Handles: 8 00:19:04.737 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:04.737 RUH Usage Desc #001: RUH Attributes: Unused 00:19:04.737 RUH Usage Desc #002: RUH Attributes: Unused 00:19:04.737 RUH Usage Desc #003: RUH Attributes: Unused 00:19:04.737 RUH Usage Desc #004: RUH Attributes: Unused 00:19:04.737 RUH Usage Desc #005: RUH Attributes: Unused 00:19:04.737 RUH Usage Desc #006: RUH Attributes: Unused 00:19:04.737 RUH Usage Desc #007: RUH Attributes: Unused 00:19:04.737 00:19:04.737 FDP statistics log page 00:19:04.737 ======================= 00:19:04.737 Host bytes with metadata written: 428580864 00:19:04.737 Media bytes with metadata written: 428625920 00:19:04.737 Media bytes erased: 0 00:19:04.737 00:19:04.737 FDP events log page 00:19:04.737 =================== 00:19:04.737 Number of FDP events: 0 00:19:04.737 00:19:04.737 NVM Specific Namespace Data 00:19:04.737 =========================== 00:19:04.737 Logical Block Storage Tag Mask: 0 00:19:04.737 Protection Information Capabilities: 00:19:04.737 16b Guard Protection Information Storage Tag Support: No 00:19:04.737 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.737 Storage Tag Check Read Support: No 00:19:04.737 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.737 00:19:04.737 real 0m1.667s 00:19:04.737 user 0m0.706s 00:19:04.737 sys 0m0.760s 00:19:04.737 11:48:01
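The identify dumps above are produced once per controller: the traced nvme.sh lines (@15/@16) loop over the detected PCIe addresses and run spdk_nvme_identify against each one. A minimal sketch of that loop, assuming the repo path shown in the trace and a bdfs array that the test framework populates elsewhere (the values below are taken from the controllers attached earlier in this log, not from the script itself):

    #!/usr/bin/env bash
    # Sketch only: mirrors the traced nvme.sh loop, not a verbatim script.
    # bdfs is normally filled in by the test framework; hard-coded here
    # from the addresses seen in this run.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:13.0 0000:00:12.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0
    done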
nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.737 11:48:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:04.737 ************************************ 00:19:04.737 END TEST nvme_identify 00:19:04.737 ************************************ 00:19:04.737 11:48:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:04.737 11:48:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:04.737 11:48:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:04.737 11:48:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.737 ************************************ 00:19:04.737 START TEST nvme_perf 00:19:04.737 ************************************ 00:19:04.737 11:48:01 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:19:04.737 11:48:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:06.115 Initializing NVMe Controllers 00:19:06.115 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:06.115 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:06.115 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:06.115 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:06.115 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:06.115 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:06.115 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:06.115 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:06.115 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:06.115 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:06.115 Initialization complete. Launching workers. 00:19:06.115 ======================================================== 00:19:06.115 Latency(us) 00:19:06.115 Device Information : IOPS MiB/s Average min max 00:19:06.115 PCIE (0000:00:10.0) NSID 1 from core 0: 12741.97 149.32 10064.11 7556.05 42233.97 00:19:06.115 PCIE (0000:00:11.0) NSID 1 from core 0: 12741.97 149.32 10042.40 7646.03 39853.87 00:19:06.115 PCIE (0000:00:13.0) NSID 1 from core 0: 12741.97 149.32 10018.35 7686.42 37789.33 00:19:06.115 PCIE (0000:00:12.0) NSID 1 from core 0: 12741.97 149.32 9994.31 7740.99 35301.47 00:19:06.115 PCIE (0000:00:12.0) NSID 2 from core 0: 12741.97 149.32 9970.37 7715.56 32916.89 00:19:06.115 PCIE (0000:00:12.0) NSID 3 from core 0: 12741.97 149.32 9946.36 7686.85 30092.50 00:19:06.115 ======================================================== 00:19:06.115 Total : 76451.84 895.92 10005.98 7556.05 42233.97 00:19:06.115 00:19:06.115 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:06.115 ================================================================================= 00:19:06.115 1.00000% : 8102.633us 00:19:06.115 10.00000% : 8817.571us 00:19:06.115 25.00000% : 9175.040us 00:19:06.115 50.00000% : 9651.665us 00:19:06.115 75.00000% : 10187.869us 00:19:06.115 90.00000% : 11439.011us 00:19:06.115 95.00000% : 12034.793us 00:19:06.115 98.00000% : 12988.044us 00:19:06.115 99.00000% : 14239.185us 00:19:06.115 99.50000% : 34078.720us 00:19:06.115 99.90000% : 41704.727us 00:19:06.115 99.99000% : 42181.353us 00:19:06.115 99.99900% : 42419.665us 00:19:06.115 99.99990% : 42419.665us 00:19:06.115 99.99999% : 42419.665us 00:19:06.115 00:19:06.115 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:06.115 ================================================================================= 00:19:06.115 1.00000% : 
8221.789us 00:19:06.115 10.00000% : 8877.149us 00:19:06.115 25.00000% : 9175.040us 00:19:06.115 50.00000% : 9592.087us 00:19:06.115 75.00000% : 10128.291us 00:19:06.115 90.00000% : 11379.433us 00:19:06.115 95.00000% : 12094.371us 00:19:06.115 98.00000% : 12928.465us 00:19:06.115 99.00000% : 14179.607us 00:19:06.115 99.50000% : 31695.593us 00:19:06.115 99.90000% : 39559.913us 00:19:06.115 99.99000% : 40036.538us 00:19:06.115 99.99900% : 40036.538us 00:19:06.115 99.99990% : 40036.538us 00:19:06.115 99.99999% : 40036.538us 00:19:06.115 00:19:06.115 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:06.115 ================================================================================= 00:19:06.115 1.00000% : 8221.789us 00:19:06.115 10.00000% : 8877.149us 00:19:06.115 25.00000% : 9175.040us 00:19:06.115 50.00000% : 9592.087us 00:19:06.115 75.00000% : 10128.291us 00:19:06.115 90.00000% : 11439.011us 00:19:06.115 95.00000% : 12094.371us 00:19:06.115 98.00000% : 12928.465us 00:19:06.115 99.00000% : 13583.825us 00:19:06.115 99.50000% : 29431.622us 00:19:06.115 99.90000% : 37415.098us 00:19:06.115 99.99000% : 37891.724us 00:19:06.115 99.99900% : 37891.724us 00:19:06.115 99.99990% : 37891.724us 00:19:06.115 99.99999% : 37891.724us 00:19:06.115 00:19:06.115 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:06.115 ================================================================================= 00:19:06.115 1.00000% : 8221.789us 00:19:06.115 10.00000% : 8877.149us 00:19:06.115 25.00000% : 9175.040us 00:19:06.115 50.00000% : 9592.087us 00:19:06.115 75.00000% : 10128.291us 00:19:06.115 90.00000% : 11379.433us 00:19:06.115 95.00000% : 12094.371us 00:19:06.115 98.00000% : 12988.044us 00:19:06.115 99.00000% : 13822.138us 00:19:06.115 99.50000% : 27048.495us 00:19:06.115 99.90000% : 35031.971us 00:19:06.115 99.99000% : 35270.284us 00:19:06.115 99.99900% : 35508.596us 00:19:06.115 99.99990% : 35508.596us 00:19:06.115 99.99999% : 35508.596us 00:19:06.115 00:19:06.115 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:06.115 ================================================================================= 00:19:06.115 1.00000% : 8221.789us 00:19:06.115 10.00000% : 8877.149us 00:19:06.115 25.00000% : 9175.040us 00:19:06.115 50.00000% : 9592.087us 00:19:06.115 75.00000% : 10128.291us 00:19:06.115 90.00000% : 11379.433us 00:19:06.115 95.00000% : 11975.215us 00:19:06.115 98.00000% : 12928.465us 00:19:06.115 99.00000% : 13822.138us 00:19:06.115 99.50000% : 24665.367us 00:19:06.115 99.90000% : 32648.844us 00:19:06.115 99.99000% : 32887.156us 00:19:06.115 99.99900% : 33125.469us 00:19:06.115 99.99990% : 33125.469us 00:19:06.115 99.99999% : 33125.469us 00:19:06.115 00:19:06.115 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:06.115 ================================================================================= 00:19:06.115 1.00000% : 8221.789us 00:19:06.115 10.00000% : 8877.149us 00:19:06.115 25.00000% : 9175.040us 00:19:06.115 50.00000% : 9592.087us 00:19:06.115 75.00000% : 10128.291us 00:19:06.115 90.00000% : 11379.433us 00:19:06.115 95.00000% : 11975.215us 00:19:06.115 98.00000% : 12928.465us 00:19:06.115 99.00000% : 14000.873us 00:19:06.115 99.50000% : 22282.240us 00:19:06.115 99.90000% : 29669.935us 00:19:06.115 99.99000% : 30146.560us 00:19:06.115 99.99900% : 30146.560us 00:19:06.115 99.99990% : 30146.560us 00:19:06.115 99.99999% : 30146.560us 00:19:06.115 00:19:06.115 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 
0: 00:19:06.115 ============================================================================== 00:19:06.115 Range in us Cumulative IO count 00:19:06.115 7536.640 - 7566.429: 0.0078% ( 1) 00:19:06.115 7566.429 - 7596.218: 0.0156% ( 1) 00:19:06.115 7596.218 - 7626.007: 0.0234% ( 1) 00:19:06.115 7626.007 - 7685.585: 0.0859% ( 8) 00:19:06.115 7685.585 - 7745.164: 0.1562% ( 9) 00:19:06.116 7745.164 - 7804.742: 0.2500% ( 12) 00:19:06.116 7804.742 - 7864.320: 0.3594% ( 14) 00:19:06.116 7864.320 - 7923.898: 0.4922% ( 17) 00:19:06.116 7923.898 - 7983.476: 0.6250% ( 17) 00:19:06.116 7983.476 - 8043.055: 0.7969% ( 22) 00:19:06.116 8043.055 - 8102.633: 1.0078% ( 27) 00:19:06.116 8102.633 - 8162.211: 1.2500% ( 31) 00:19:06.116 8162.211 - 8221.789: 1.5000% ( 32) 00:19:06.116 8221.789 - 8281.367: 1.7891% ( 37) 00:19:06.116 8281.367 - 8340.945: 2.0703% ( 36) 00:19:06.116 8340.945 - 8400.524: 2.4297% ( 46) 00:19:06.116 8400.524 - 8460.102: 2.8438% ( 53) 00:19:06.116 8460.102 - 8519.680: 3.3906% ( 70) 00:19:06.116 8519.680 - 8579.258: 4.2344% ( 108) 00:19:06.116 8579.258 - 8638.836: 5.3516% ( 143) 00:19:06.116 8638.836 - 8698.415: 6.8984% ( 198) 00:19:06.116 8698.415 - 8757.993: 8.8438% ( 249) 00:19:06.116 8757.993 - 8817.571: 11.0938% ( 288) 00:19:06.116 8817.571 - 8877.149: 13.4297% ( 299) 00:19:06.116 8877.149 - 8936.727: 15.9688% ( 325) 00:19:06.116 8936.727 - 8996.305: 18.7891% ( 361) 00:19:06.116 8996.305 - 9055.884: 21.7891% ( 384) 00:19:06.116 9055.884 - 9115.462: 24.8906% ( 397) 00:19:06.116 9115.462 - 9175.040: 27.8516% ( 379) 00:19:06.116 9175.040 - 9234.618: 30.9141% ( 392) 00:19:06.116 9234.618 - 9294.196: 34.0000% ( 395) 00:19:06.116 9294.196 - 9353.775: 36.8828% ( 369) 00:19:06.116 9353.775 - 9413.353: 40.0312% ( 403) 00:19:06.116 9413.353 - 9472.931: 43.1484% ( 399) 00:19:06.116 9472.931 - 9532.509: 46.2031% ( 391) 00:19:06.116 9532.509 - 9592.087: 49.3438% ( 402) 00:19:06.116 9592.087 - 9651.665: 52.3516% ( 385) 00:19:06.116 9651.665 - 9711.244: 55.7031% ( 429) 00:19:06.116 9711.244 - 9770.822: 58.8672% ( 405) 00:19:06.116 9770.822 - 9830.400: 61.8906% ( 387) 00:19:06.116 9830.400 - 9889.978: 64.9141% ( 387) 00:19:06.116 9889.978 - 9949.556: 67.7031% ( 357) 00:19:06.116 9949.556 - 10009.135: 70.0234% ( 297) 00:19:06.116 10009.135 - 10068.713: 72.1172% ( 268) 00:19:06.116 10068.713 - 10128.291: 73.9062% ( 229) 00:19:06.116 10128.291 - 10187.869: 75.3672% ( 187) 00:19:06.116 10187.869 - 10247.447: 76.6172% ( 160) 00:19:06.116 10247.447 - 10307.025: 77.7422% ( 144) 00:19:06.116 10307.025 - 10366.604: 78.6094% ( 111) 00:19:06.116 10366.604 - 10426.182: 79.3906% ( 100) 00:19:06.116 10426.182 - 10485.760: 80.1953% ( 103) 00:19:06.116 10485.760 - 10545.338: 80.9688% ( 99) 00:19:06.116 10545.338 - 10604.916: 81.7031% ( 94) 00:19:06.116 10604.916 - 10664.495: 82.3906% ( 88) 00:19:06.116 10664.495 - 10724.073: 83.0312% ( 82) 00:19:06.116 10724.073 - 10783.651: 83.6875% ( 84) 00:19:06.116 10783.651 - 10843.229: 84.4141% ( 93) 00:19:06.116 10843.229 - 10902.807: 85.0391% ( 80) 00:19:06.116 10902.807 - 10962.385: 85.6719% ( 81) 00:19:06.116 10962.385 - 11021.964: 86.2812% ( 78) 00:19:06.116 11021.964 - 11081.542: 86.9219% ( 82) 00:19:06.116 11081.542 - 11141.120: 87.5391% ( 79) 00:19:06.116 11141.120 - 11200.698: 88.1875% ( 83) 00:19:06.116 11200.698 - 11260.276: 88.7734% ( 75) 00:19:06.116 11260.276 - 11319.855: 89.3594% ( 75) 00:19:06.116 11319.855 - 11379.433: 89.9062% ( 70) 00:19:06.116 11379.433 - 11439.011: 90.4219% ( 66) 00:19:06.116 11439.011 - 11498.589: 90.9453% ( 67) 00:19:06.116 11498.589 - 
11558.167: 91.4531% ( 65) 00:19:06.116 11558.167 - 11617.745: 91.9141% ( 59) 00:19:06.116 11617.745 - 11677.324: 92.4531% ( 69) 00:19:06.116 11677.324 - 11736.902: 93.0234% ( 73) 00:19:06.116 11736.902 - 11796.480: 93.5156% ( 63) 00:19:06.116 11796.480 - 11856.058: 93.9531% ( 56) 00:19:06.116 11856.058 - 11915.636: 94.3438% ( 50) 00:19:06.116 11915.636 - 11975.215: 94.7031% ( 46) 00:19:06.116 11975.215 - 12034.793: 95.0391% ( 43) 00:19:06.116 12034.793 - 12094.371: 95.3516% ( 40) 00:19:06.116 12094.371 - 12153.949: 95.6250% ( 35) 00:19:06.116 12153.949 - 12213.527: 95.8984% ( 35) 00:19:06.116 12213.527 - 12273.105: 96.1172% ( 28) 00:19:06.116 12273.105 - 12332.684: 96.3516% ( 30) 00:19:06.116 12332.684 - 12392.262: 96.5469% ( 25) 00:19:06.116 12392.262 - 12451.840: 96.7500% ( 26) 00:19:06.116 12451.840 - 12511.418: 96.8906% ( 18) 00:19:06.116 12511.418 - 12570.996: 97.0547% ( 21) 00:19:06.116 12570.996 - 12630.575: 97.1875% ( 17) 00:19:06.116 12630.575 - 12690.153: 97.3203% ( 17) 00:19:06.116 12690.153 - 12749.731: 97.4688% ( 19) 00:19:06.116 12749.731 - 12809.309: 97.6016% ( 17) 00:19:06.116 12809.309 - 12868.887: 97.7500% ( 19) 00:19:06.116 12868.887 - 12928.465: 97.9062% ( 20) 00:19:06.116 12928.465 - 12988.044: 98.0156% ( 14) 00:19:06.116 12988.044 - 13047.622: 98.1328% ( 15) 00:19:06.116 13047.622 - 13107.200: 98.2656% ( 17) 00:19:06.116 13107.200 - 13166.778: 98.3750% ( 14) 00:19:06.116 13166.778 - 13226.356: 98.4609% ( 11) 00:19:06.116 13226.356 - 13285.935: 98.5234% ( 8) 00:19:06.116 13285.935 - 13345.513: 98.6016% ( 10) 00:19:06.116 13345.513 - 13405.091: 98.6562% ( 7) 00:19:06.116 13405.091 - 13464.669: 98.7109% ( 7) 00:19:06.116 13464.669 - 13524.247: 98.7422% ( 4) 00:19:06.116 13524.247 - 13583.825: 98.7734% ( 4) 00:19:06.116 13583.825 - 13643.404: 98.7891% ( 2) 00:19:06.116 13643.404 - 13702.982: 98.8125% ( 3) 00:19:06.116 13702.982 - 13762.560: 98.8359% ( 3) 00:19:06.116 13762.560 - 13822.138: 98.8594% ( 3) 00:19:06.116 13822.138 - 13881.716: 98.8828% ( 3) 00:19:06.116 13881.716 - 13941.295: 98.8984% ( 2) 00:19:06.116 13941.295 - 14000.873: 98.9297% ( 4) 00:19:06.116 14000.873 - 14060.451: 98.9531% ( 3) 00:19:06.116 14060.451 - 14120.029: 98.9766% ( 3) 00:19:06.116 14120.029 - 14179.607: 98.9922% ( 2) 00:19:06.116 14179.607 - 14239.185: 99.0000% ( 1) 00:19:06.116 30980.655 - 31218.967: 99.0234% ( 3) 00:19:06.116 31218.967 - 31457.280: 99.0703% ( 6) 00:19:06.116 31457.280 - 31695.593: 99.1016% ( 4) 00:19:06.116 31695.593 - 31933.905: 99.1562% ( 7) 00:19:06.116 31933.905 - 32172.218: 99.1953% ( 5) 00:19:06.116 32172.218 - 32410.531: 99.2422% ( 6) 00:19:06.116 32410.531 - 32648.844: 99.2812% ( 5) 00:19:06.116 32648.844 - 32887.156: 99.3203% ( 5) 00:19:06.116 32887.156 - 33125.469: 99.3672% ( 6) 00:19:06.116 33125.469 - 33363.782: 99.4141% ( 6) 00:19:06.116 33363.782 - 33602.095: 99.4531% ( 5) 00:19:06.116 33602.095 - 33840.407: 99.4922% ( 5) 00:19:06.116 33840.407 - 34078.720: 99.5000% ( 1) 00:19:06.116 39321.600 - 39559.913: 99.5312% ( 4) 00:19:06.116 39559.913 - 39798.225: 99.5781% ( 6) 00:19:06.116 39798.225 - 40036.538: 99.6250% ( 6) 00:19:06.116 40036.538 - 40274.851: 99.6641% ( 5) 00:19:06.116 40274.851 - 40513.164: 99.7109% ( 6) 00:19:06.116 40513.164 - 40751.476: 99.7500% ( 5) 00:19:06.116 40751.476 - 40989.789: 99.7812% ( 4) 00:19:06.116 40989.789 - 41228.102: 99.8281% ( 6) 00:19:06.116 41228.102 - 41466.415: 99.8672% ( 5) 00:19:06.116 41466.415 - 41704.727: 99.9219% ( 7) 00:19:06.116 41704.727 - 41943.040: 99.9531% ( 4) 00:19:06.116 41943.040 - 42181.353: 99.9922% ( 5) 
00:19:06.116 42181.353 - 42419.665: 100.0000% ( 1) 00:19:06.116 00:19:06.116 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:06.116 ============================================================================== 00:19:06.116 Range in us Cumulative IO count 00:19:06.116 7626.007 - 7685.585: 0.0156% ( 2) 00:19:06.116 7685.585 - 7745.164: 0.0547% ( 5) 00:19:06.116 7745.164 - 7804.742: 0.1250% ( 9) 00:19:06.116 7804.742 - 7864.320: 0.2109% ( 11) 00:19:06.116 7864.320 - 7923.898: 0.3281% ( 15) 00:19:06.116 7923.898 - 7983.476: 0.4453% ( 15) 00:19:06.116 7983.476 - 8043.055: 0.5781% ( 17) 00:19:06.116 8043.055 - 8102.633: 0.7656% ( 24) 00:19:06.116 8102.633 - 8162.211: 0.9922% ( 29) 00:19:06.116 8162.211 - 8221.789: 1.2734% ( 36) 00:19:06.116 8221.789 - 8281.367: 1.5312% ( 33) 00:19:06.116 8281.367 - 8340.945: 1.8594% ( 42) 00:19:06.116 8340.945 - 8400.524: 2.1953% ( 43) 00:19:06.116 8400.524 - 8460.102: 2.5781% ( 49) 00:19:06.116 8460.102 - 8519.680: 3.0234% ( 57) 00:19:06.117 8519.680 - 8579.258: 3.6016% ( 74) 00:19:06.117 8579.258 - 8638.836: 4.2578% ( 84) 00:19:06.117 8638.836 - 8698.415: 5.2734% ( 130) 00:19:06.117 8698.415 - 8757.993: 6.5781% ( 167) 00:19:06.117 8757.993 - 8817.571: 8.4844% ( 244) 00:19:06.117 8817.571 - 8877.149: 10.6719% ( 280) 00:19:06.117 8877.149 - 8936.727: 13.3281% ( 340) 00:19:06.117 8936.727 - 8996.305: 16.1406% ( 360) 00:19:06.117 8996.305 - 9055.884: 19.2266% ( 395) 00:19:06.117 9055.884 - 9115.462: 22.3984% ( 406) 00:19:06.117 9115.462 - 9175.040: 25.7344% ( 427) 00:19:06.117 9175.040 - 9234.618: 29.2266% ( 447) 00:19:06.117 9234.618 - 9294.196: 32.7109% ( 446) 00:19:06.117 9294.196 - 9353.775: 36.2734% ( 456) 00:19:06.117 9353.775 - 9413.353: 39.8281% ( 455) 00:19:06.117 9413.353 - 9472.931: 43.4375% ( 462) 00:19:06.117 9472.931 - 9532.509: 47.1250% ( 472) 00:19:06.117 9532.509 - 9592.087: 50.7266% ( 461) 00:19:06.117 9592.087 - 9651.665: 54.4375% ( 475) 00:19:06.117 9651.665 - 9711.244: 57.9766% ( 453) 00:19:06.117 9711.244 - 9770.822: 61.5000% ( 451) 00:19:06.117 9770.822 - 9830.400: 64.6797% ( 407) 00:19:06.117 9830.400 - 9889.978: 67.5469% ( 367) 00:19:06.117 9889.978 - 9949.556: 69.8984% ( 301) 00:19:06.117 9949.556 - 10009.135: 72.0078% ( 270) 00:19:06.117 10009.135 - 10068.713: 73.7266% ( 220) 00:19:06.117 10068.713 - 10128.291: 75.2578% ( 196) 00:19:06.117 10128.291 - 10187.869: 76.4531% ( 153) 00:19:06.117 10187.869 - 10247.447: 77.5312% ( 138) 00:19:06.117 10247.447 - 10307.025: 78.3125% ( 100) 00:19:06.117 10307.025 - 10366.604: 79.0859% ( 99) 00:19:06.117 10366.604 - 10426.182: 79.7969% ( 91) 00:19:06.117 10426.182 - 10485.760: 80.5078% ( 91) 00:19:06.117 10485.760 - 10545.338: 81.1953% ( 88) 00:19:06.117 10545.338 - 10604.916: 81.8828% ( 88) 00:19:06.117 10604.916 - 10664.495: 82.5234% ( 82) 00:19:06.117 10664.495 - 10724.073: 83.2656% ( 95) 00:19:06.117 10724.073 - 10783.651: 83.9531% ( 88) 00:19:06.117 10783.651 - 10843.229: 84.6641% ( 91) 00:19:06.117 10843.229 - 10902.807: 85.2891% ( 80) 00:19:06.117 10902.807 - 10962.385: 85.9453% ( 84) 00:19:06.117 10962.385 - 11021.964: 86.6094% ( 85) 00:19:06.117 11021.964 - 11081.542: 87.2656% ( 84) 00:19:06.117 11081.542 - 11141.120: 87.8672% ( 77) 00:19:06.117 11141.120 - 11200.698: 88.4375% ( 73) 00:19:06.117 11200.698 - 11260.276: 88.9375% ( 64) 00:19:06.117 11260.276 - 11319.855: 89.4766% ( 69) 00:19:06.117 11319.855 - 11379.433: 90.0391% ( 72) 00:19:06.117 11379.433 - 11439.011: 90.5547% ( 66) 00:19:06.117 11439.011 - 11498.589: 91.1641% ( 78) 00:19:06.117 11498.589 - 11558.167: 
91.6406% ( 61) 00:19:06.117 11558.167 - 11617.745: 92.1406% ( 64) 00:19:06.117 11617.745 - 11677.324: 92.6484% ( 65) 00:19:06.117 11677.324 - 11736.902: 93.1328% ( 62) 00:19:06.117 11736.902 - 11796.480: 93.5781% ( 57) 00:19:06.117 11796.480 - 11856.058: 93.9453% ( 47) 00:19:06.117 11856.058 - 11915.636: 94.2891% ( 44) 00:19:06.117 11915.636 - 11975.215: 94.6172% ( 42) 00:19:06.117 11975.215 - 12034.793: 94.9141% ( 38) 00:19:06.117 12034.793 - 12094.371: 95.2031% ( 37) 00:19:06.117 12094.371 - 12153.949: 95.4453% ( 31) 00:19:06.117 12153.949 - 12213.527: 95.6719% ( 29) 00:19:06.117 12213.527 - 12273.105: 95.9531% ( 36) 00:19:06.117 12273.105 - 12332.684: 96.1953% ( 31) 00:19:06.117 12332.684 - 12392.262: 96.4453% ( 32) 00:19:06.117 12392.262 - 12451.840: 96.6719% ( 29) 00:19:06.117 12451.840 - 12511.418: 96.8750% ( 26) 00:19:06.117 12511.418 - 12570.996: 97.0703% ( 25) 00:19:06.117 12570.996 - 12630.575: 97.2344% ( 21) 00:19:06.117 12630.575 - 12690.153: 97.3984% ( 21) 00:19:06.117 12690.153 - 12749.731: 97.5781% ( 23) 00:19:06.117 12749.731 - 12809.309: 97.7344% ( 20) 00:19:06.117 12809.309 - 12868.887: 97.8828% ( 19) 00:19:06.117 12868.887 - 12928.465: 98.0000% ( 15) 00:19:06.117 12928.465 - 12988.044: 98.1016% ( 13) 00:19:06.117 12988.044 - 13047.622: 98.2344% ( 17) 00:19:06.117 13047.622 - 13107.200: 98.3281% ( 12) 00:19:06.117 13107.200 - 13166.778: 98.4062% ( 10) 00:19:06.117 13166.778 - 13226.356: 98.4922% ( 11) 00:19:06.117 13226.356 - 13285.935: 98.5703% ( 10) 00:19:06.117 13285.935 - 13345.513: 98.6328% ( 8) 00:19:06.117 13345.513 - 13405.091: 98.6875% ( 7) 00:19:06.117 13405.091 - 13464.669: 98.7266% ( 5) 00:19:06.117 13464.669 - 13524.247: 98.7578% ( 4) 00:19:06.117 13524.247 - 13583.825: 98.7812% ( 3) 00:19:06.117 13583.825 - 13643.404: 98.8047% ( 3) 00:19:06.117 13643.404 - 13702.982: 98.8281% ( 3) 00:19:06.117 13702.982 - 13762.560: 98.8516% ( 3) 00:19:06.117 13762.560 - 13822.138: 98.8828% ( 4) 00:19:06.117 13822.138 - 13881.716: 98.8984% ( 2) 00:19:06.117 13881.716 - 13941.295: 98.9219% ( 3) 00:19:06.117 13941.295 - 14000.873: 98.9453% ( 3) 00:19:06.117 14000.873 - 14060.451: 98.9688% ( 3) 00:19:06.117 14060.451 - 14120.029: 98.9922% ( 3) 00:19:06.117 14120.029 - 14179.607: 99.0000% ( 1) 00:19:06.117 28954.996 - 29074.153: 99.0156% ( 2) 00:19:06.117 29074.153 - 29193.309: 99.0391% ( 3) 00:19:06.117 29193.309 - 29312.465: 99.0625% ( 3) 00:19:06.117 29312.465 - 29431.622: 99.0859% ( 3) 00:19:06.117 29431.622 - 29550.778: 99.1094% ( 3) 00:19:06.117 29550.778 - 29669.935: 99.1328% ( 3) 00:19:06.117 29669.935 - 29789.091: 99.1562% ( 3) 00:19:06.117 29789.091 - 29908.247: 99.1797% ( 3) 00:19:06.117 29908.247 - 30027.404: 99.1953% ( 2) 00:19:06.117 30027.404 - 30146.560: 99.2188% ( 3) 00:19:06.117 30146.560 - 30265.716: 99.2422% ( 3) 00:19:06.117 30265.716 - 30384.873: 99.2578% ( 2) 00:19:06.117 30384.873 - 30504.029: 99.2812% ( 3) 00:19:06.117 30504.029 - 30742.342: 99.3281% ( 6) 00:19:06.117 30742.342 - 30980.655: 99.3750% ( 6) 00:19:06.117 30980.655 - 31218.967: 99.4219% ( 6) 00:19:06.117 31218.967 - 31457.280: 99.4688% ( 6) 00:19:06.117 31457.280 - 31695.593: 99.5000% ( 4) 00:19:06.117 37176.785 - 37415.098: 99.5312% ( 4) 00:19:06.117 37415.098 - 37653.411: 99.5781% ( 6) 00:19:06.117 37653.411 - 37891.724: 99.6250% ( 6) 00:19:06.117 37891.724 - 38130.036: 99.6641% ( 5) 00:19:06.117 38130.036 - 38368.349: 99.7109% ( 6) 00:19:06.117 38368.349 - 38606.662: 99.7578% ( 6) 00:19:06.117 38606.662 - 38844.975: 99.8047% ( 6) 00:19:06.117 38844.975 - 39083.287: 99.8516% ( 6) 
00:19:06.117 39083.287 - 39321.600: 99.8984% ( 6) 00:19:06.117 39321.600 - 39559.913: 99.9375% ( 5) 00:19:06.117 39559.913 - 39798.225: 99.9844% ( 6) 00:19:06.117 39798.225 - 40036.538: 100.0000% ( 2) 00:19:06.117 00:19:06.117 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:06.117 ============================================================================== 00:19:06.117 Range in us Cumulative IO count 00:19:06.117 7685.585 - 7745.164: 0.0312% ( 4) 00:19:06.117 7745.164 - 7804.742: 0.0781% ( 6) 00:19:06.117 7804.742 - 7864.320: 0.1641% ( 11) 00:19:06.117 7864.320 - 7923.898: 0.2734% ( 14) 00:19:06.117 7923.898 - 7983.476: 0.3984% ( 16) 00:19:06.117 7983.476 - 8043.055: 0.5469% ( 19) 00:19:06.117 8043.055 - 8102.633: 0.7422% ( 25) 00:19:06.117 8102.633 - 8162.211: 0.9609% ( 28) 00:19:06.117 8162.211 - 8221.789: 1.2109% ( 32) 00:19:06.117 8221.789 - 8281.367: 1.4844% ( 35) 00:19:06.117 8281.367 - 8340.945: 1.8125% ( 42) 00:19:06.117 8340.945 - 8400.524: 2.1641% ( 45) 00:19:06.117 8400.524 - 8460.102: 2.5547% ( 50) 00:19:06.117 8460.102 - 8519.680: 3.0078% ( 58) 00:19:06.117 8519.680 - 8579.258: 3.5234% ( 66) 00:19:06.117 8579.258 - 8638.836: 4.1797% ( 84) 00:19:06.117 8638.836 - 8698.415: 5.0703% ( 114) 00:19:06.117 8698.415 - 8757.993: 6.4688% ( 179) 00:19:06.117 8757.993 - 8817.571: 8.2578% ( 229) 00:19:06.117 8817.571 - 8877.149: 10.5156% ( 289) 00:19:06.117 8877.149 - 8936.727: 13.0781% ( 328) 00:19:06.117 8936.727 - 8996.305: 15.8906% ( 360) 00:19:06.117 8996.305 - 9055.884: 18.8828% ( 383) 00:19:06.118 9055.884 - 9115.462: 22.1953% ( 424) 00:19:06.118 9115.462 - 9175.040: 25.7266% ( 452) 00:19:06.118 9175.040 - 9234.618: 29.1484% ( 438) 00:19:06.118 9234.618 - 9294.196: 32.7188% ( 457) 00:19:06.118 9294.196 - 9353.775: 36.3359% ( 463) 00:19:06.118 9353.775 - 9413.353: 39.9766% ( 466) 00:19:06.118 9413.353 - 9472.931: 43.5312% ( 455) 00:19:06.118 9472.931 - 9532.509: 47.2344% ( 474) 00:19:06.118 9532.509 - 9592.087: 50.8672% ( 465) 00:19:06.118 9592.087 - 9651.665: 54.5547% ( 472) 00:19:06.118 9651.665 - 9711.244: 58.1094% ( 455) 00:19:06.118 9711.244 - 9770.822: 61.6328% ( 451) 00:19:06.118 9770.822 - 9830.400: 64.8672% ( 414) 00:19:06.118 9830.400 - 9889.978: 67.7344% ( 367) 00:19:06.118 9889.978 - 9949.556: 70.1016% ( 303) 00:19:06.118 9949.556 - 10009.135: 72.3125% ( 283) 00:19:06.118 10009.135 - 10068.713: 74.1797% ( 239) 00:19:06.118 10068.713 - 10128.291: 75.6328% ( 186) 00:19:06.118 10128.291 - 10187.869: 76.8281% ( 153) 00:19:06.118 10187.869 - 10247.447: 77.8203% ( 127) 00:19:06.118 10247.447 - 10307.025: 78.6641% ( 108) 00:19:06.118 10307.025 - 10366.604: 79.5156% ( 109) 00:19:06.118 10366.604 - 10426.182: 80.1562% ( 82) 00:19:06.118 10426.182 - 10485.760: 80.7812% ( 80) 00:19:06.118 10485.760 - 10545.338: 81.4062% ( 80) 00:19:06.118 10545.338 - 10604.916: 82.0234% ( 79) 00:19:06.118 10604.916 - 10664.495: 82.6797% ( 84) 00:19:06.118 10664.495 - 10724.073: 83.3203% ( 82) 00:19:06.118 10724.073 - 10783.651: 83.9922% ( 86) 00:19:06.118 10783.651 - 10843.229: 84.6719% ( 87) 00:19:06.118 10843.229 - 10902.807: 85.3125% ( 82) 00:19:06.118 10902.807 - 10962.385: 85.9531% ( 82) 00:19:06.118 10962.385 - 11021.964: 86.5312% ( 74) 00:19:06.118 11021.964 - 11081.542: 87.1641% ( 81) 00:19:06.118 11081.542 - 11141.120: 87.7812% ( 79) 00:19:06.118 11141.120 - 11200.698: 88.3828% ( 77) 00:19:06.118 11200.698 - 11260.276: 88.8594% ( 61) 00:19:06.118 11260.276 - 11319.855: 89.3828% ( 67) 00:19:06.118 11319.855 - 11379.433: 89.9375% ( 71) 00:19:06.118 11379.433 - 11439.011: 
90.4297% ( 63) 00:19:06.118 11439.011 - 11498.589: 90.9766% ( 70) 00:19:06.118 11498.589 - 11558.167: 91.5312% ( 71) 00:19:06.118 11558.167 - 11617.745: 92.1172% ( 75) 00:19:06.118 11617.745 - 11677.324: 92.6641% ( 70) 00:19:06.118 11677.324 - 11736.902: 93.1641% ( 64) 00:19:06.118 11736.902 - 11796.480: 93.6094% ( 57) 00:19:06.118 11796.480 - 11856.058: 93.9844% ( 48) 00:19:06.118 11856.058 - 11915.636: 94.3047% ( 41) 00:19:06.118 11915.636 - 11975.215: 94.5859% ( 36) 00:19:06.118 11975.215 - 12034.793: 94.8359% ( 32) 00:19:06.118 12034.793 - 12094.371: 95.0781% ( 31) 00:19:06.118 12094.371 - 12153.949: 95.3125% ( 30) 00:19:06.118 12153.949 - 12213.527: 95.5547% ( 31) 00:19:06.118 12213.527 - 12273.105: 95.8047% ( 32) 00:19:06.118 12273.105 - 12332.684: 96.0703% ( 34) 00:19:06.118 12332.684 - 12392.262: 96.3516% ( 36) 00:19:06.118 12392.262 - 12451.840: 96.6016% ( 32) 00:19:06.118 12451.840 - 12511.418: 96.8828% ( 36) 00:19:06.118 12511.418 - 12570.996: 97.1250% ( 31) 00:19:06.118 12570.996 - 12630.575: 97.2891% ( 21) 00:19:06.118 12630.575 - 12690.153: 97.4531% ( 21) 00:19:06.118 12690.153 - 12749.731: 97.6094% ( 20) 00:19:06.118 12749.731 - 12809.309: 97.7812% ( 22) 00:19:06.118 12809.309 - 12868.887: 97.9219% ( 18) 00:19:06.118 12868.887 - 12928.465: 98.0469% ( 16) 00:19:06.118 12928.465 - 12988.044: 98.1719% ( 16) 00:19:06.118 12988.044 - 13047.622: 98.2969% ( 16) 00:19:06.118 13047.622 - 13107.200: 98.4062% ( 14) 00:19:06.118 13107.200 - 13166.778: 98.5078% ( 13) 00:19:06.118 13166.778 - 13226.356: 98.6094% ( 13) 00:19:06.118 13226.356 - 13285.935: 98.7188% ( 14) 00:19:06.118 13285.935 - 13345.513: 98.8203% ( 13) 00:19:06.118 13345.513 - 13405.091: 98.9219% ( 13) 00:19:06.118 13405.091 - 13464.669: 98.9609% ( 5) 00:19:06.118 13464.669 - 13524.247: 98.9844% ( 3) 00:19:06.118 13524.247 - 13583.825: 99.0000% ( 2) 00:19:06.118 26691.025 - 26810.182: 99.0156% ( 2) 00:19:06.118 26810.182 - 26929.338: 99.0391% ( 3) 00:19:06.118 26929.338 - 27048.495: 99.0625% ( 3) 00:19:06.118 27048.495 - 27167.651: 99.0781% ( 2) 00:19:06.118 27167.651 - 27286.807: 99.0938% ( 2) 00:19:06.118 27286.807 - 27405.964: 99.1172% ( 3) 00:19:06.118 27405.964 - 27525.120: 99.1406% ( 3) 00:19:06.118 27525.120 - 27644.276: 99.1641% ( 3) 00:19:06.118 27644.276 - 27763.433: 99.1875% ( 3) 00:19:06.118 27763.433 - 27882.589: 99.2109% ( 3) 00:19:06.118 27882.589 - 28001.745: 99.2266% ( 2) 00:19:06.118 28001.745 - 28120.902: 99.2500% ( 3) 00:19:06.118 28120.902 - 28240.058: 99.2734% ( 3) 00:19:06.118 28240.058 - 28359.215: 99.2969% ( 3) 00:19:06.118 28359.215 - 28478.371: 99.3203% ( 3) 00:19:06.118 28478.371 - 28597.527: 99.3438% ( 3) 00:19:06.118 28597.527 - 28716.684: 99.3672% ( 3) 00:19:06.118 28716.684 - 28835.840: 99.3906% ( 3) 00:19:06.118 28835.840 - 28954.996: 99.4141% ( 3) 00:19:06.118 28954.996 - 29074.153: 99.4375% ( 3) 00:19:06.118 29074.153 - 29193.309: 99.4609% ( 3) 00:19:06.118 29193.309 - 29312.465: 99.4766% ( 2) 00:19:06.118 29312.465 - 29431.622: 99.5000% ( 3) 00:19:06.118 35031.971 - 35270.284: 99.5312% ( 4) 00:19:06.118 35270.284 - 35508.596: 99.5781% ( 6) 00:19:06.118 35508.596 - 35746.909: 99.6250% ( 6) 00:19:06.118 35746.909 - 35985.222: 99.6562% ( 4) 00:19:06.118 35985.222 - 36223.535: 99.7031% ( 6) 00:19:06.118 36223.535 - 36461.847: 99.7500% ( 6) 00:19:06.118 36461.847 - 36700.160: 99.7891% ( 5) 00:19:06.118 36700.160 - 36938.473: 99.8359% ( 6) 00:19:06.118 36938.473 - 37176.785: 99.8828% ( 6) 00:19:06.118 37176.785 - 37415.098: 99.9219% ( 5) 00:19:06.118 37415.098 - 37653.411: 99.9688% ( 6) 
00:19:06.118 37653.411 - 37891.724: 100.0000% ( 4) 00:19:06.118 00:19:06.118 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:06.118 ============================================================================== 00:19:06.118 Range in us Cumulative IO count 00:19:06.118 7685.585 - 7745.164: 0.0078% ( 1) 00:19:06.118 7745.164 - 7804.742: 0.0547% ( 6) 00:19:06.118 7804.742 - 7864.320: 0.1484% ( 12) 00:19:06.118 7864.320 - 7923.898: 0.2734% ( 16) 00:19:06.118 7923.898 - 7983.476: 0.4141% ( 18) 00:19:06.118 7983.476 - 8043.055: 0.5625% ( 19) 00:19:06.118 8043.055 - 8102.633: 0.7500% ( 24) 00:19:06.118 8102.633 - 8162.211: 0.9453% ( 25) 00:19:06.118 8162.211 - 8221.789: 1.1875% ( 31) 00:19:06.118 8221.789 - 8281.367: 1.4766% ( 37) 00:19:06.118 8281.367 - 8340.945: 1.8203% ( 44) 00:19:06.118 8340.945 - 8400.524: 2.1875% ( 47) 00:19:06.118 8400.524 - 8460.102: 2.5781% ( 50) 00:19:06.118 8460.102 - 8519.680: 3.0469% ( 60) 00:19:06.118 8519.680 - 8579.258: 3.6094% ( 72) 00:19:06.118 8579.258 - 8638.836: 4.2422% ( 81) 00:19:06.118 8638.836 - 8698.415: 5.1406% ( 115) 00:19:06.118 8698.415 - 8757.993: 6.5156% ( 176) 00:19:06.118 8757.993 - 8817.571: 8.3750% ( 238) 00:19:06.118 8817.571 - 8877.149: 10.6172% ( 287) 00:19:06.118 8877.149 - 8936.727: 13.0703% ( 314) 00:19:06.118 8936.727 - 8996.305: 15.9297% ( 366) 00:19:06.118 8996.305 - 9055.884: 18.9375% ( 385) 00:19:06.118 9055.884 - 9115.462: 22.2266% ( 421) 00:19:06.118 9115.462 - 9175.040: 25.6328% ( 436) 00:19:06.118 9175.040 - 9234.618: 29.1484% ( 450) 00:19:06.118 9234.618 - 9294.196: 32.6328% ( 446) 00:19:06.118 9294.196 - 9353.775: 36.2656% ( 465) 00:19:06.118 9353.775 - 9413.353: 39.8047% ( 453) 00:19:06.118 9413.353 - 9472.931: 43.3828% ( 458) 00:19:06.118 9472.931 - 9532.509: 47.0312% ( 467) 00:19:06.118 9532.509 - 9592.087: 50.6875% ( 468) 00:19:06.118 9592.087 - 9651.665: 54.2891% ( 461) 00:19:06.118 9651.665 - 9711.244: 57.9141% ( 464) 00:19:06.118 9711.244 - 9770.822: 61.4844% ( 457) 00:19:06.118 9770.822 - 9830.400: 64.7422% ( 417) 00:19:06.118 9830.400 - 9889.978: 67.5625% ( 361) 00:19:06.118 9889.978 - 9949.556: 69.9375% ( 304) 00:19:06.118 9949.556 - 10009.135: 72.0547% ( 271) 00:19:06.119 10009.135 - 10068.713: 73.9141% ( 238) 00:19:06.119 10068.713 - 10128.291: 75.4531% ( 197) 00:19:06.119 10128.291 - 10187.869: 76.7578% ( 167) 00:19:06.119 10187.869 - 10247.447: 77.7812% ( 131) 00:19:06.119 10247.447 - 10307.025: 78.6094% ( 106) 00:19:06.119 10307.025 - 10366.604: 79.3203% ( 91) 00:19:06.119 10366.604 - 10426.182: 80.0234% ( 90) 00:19:06.119 10426.182 - 10485.760: 80.6875% ( 85) 00:19:06.119 10485.760 - 10545.338: 81.3281% ( 82) 00:19:06.119 10545.338 - 10604.916: 81.9844% ( 84) 00:19:06.119 10604.916 - 10664.495: 82.6797% ( 89) 00:19:06.119 10664.495 - 10724.073: 83.3516% ( 86) 00:19:06.119 10724.073 - 10783.651: 84.0391% ( 88) 00:19:06.119 10783.651 - 10843.229: 84.6875% ( 83) 00:19:06.119 10843.229 - 10902.807: 85.2891% ( 77) 00:19:06.119 10902.807 - 10962.385: 86.0000% ( 91) 00:19:06.119 10962.385 - 11021.964: 86.6484% ( 83) 00:19:06.119 11021.964 - 11081.542: 87.2891% ( 82) 00:19:06.119 11081.542 - 11141.120: 87.9531% ( 85) 00:19:06.119 11141.120 - 11200.698: 88.6016% ( 83) 00:19:06.119 11200.698 - 11260.276: 89.1719% ( 73) 00:19:06.119 11260.276 - 11319.855: 89.7578% ( 75) 00:19:06.119 11319.855 - 11379.433: 90.3281% ( 73) 00:19:06.119 11379.433 - 11439.011: 90.8125% ( 62) 00:19:06.119 11439.011 - 11498.589: 91.3438% ( 68) 00:19:06.119 11498.589 - 11558.167: 91.8594% ( 66) 00:19:06.119 11558.167 - 
11617.745: 92.4219% ( 72) 00:19:06.119 11617.745 - 11677.324: 92.9453% ( 67) 00:19:06.119 11677.324 - 11736.902: 93.3906% ( 57) 00:19:06.119 11736.902 - 11796.480: 93.7734% ( 49) 00:19:06.119 11796.480 - 11856.058: 94.1094% ( 43) 00:19:06.119 11856.058 - 11915.636: 94.3984% ( 37) 00:19:06.119 11915.636 - 11975.215: 94.6484% ( 32) 00:19:06.119 11975.215 - 12034.793: 94.9062% ( 33) 00:19:06.119 12034.793 - 12094.371: 95.1406% ( 30) 00:19:06.119 12094.371 - 12153.949: 95.3594% ( 28) 00:19:06.119 12153.949 - 12213.527: 95.6094% ( 32) 00:19:06.119 12213.527 - 12273.105: 95.8828% ( 35) 00:19:06.119 12273.105 - 12332.684: 96.1250% ( 31) 00:19:06.119 12332.684 - 12392.262: 96.3359% ( 27) 00:19:06.119 12392.262 - 12451.840: 96.5391% ( 26) 00:19:06.119 12451.840 - 12511.418: 96.7344% ( 25) 00:19:06.119 12511.418 - 12570.996: 96.9219% ( 24) 00:19:06.119 12570.996 - 12630.575: 97.1172% ( 25) 00:19:06.119 12630.575 - 12690.153: 97.3125% ( 25) 00:19:06.119 12690.153 - 12749.731: 97.4844% ( 22) 00:19:06.119 12749.731 - 12809.309: 97.6406% ( 20) 00:19:06.119 12809.309 - 12868.887: 97.8359% ( 25) 00:19:06.119 12868.887 - 12928.465: 97.9844% ( 19) 00:19:06.119 12928.465 - 12988.044: 98.1719% ( 24) 00:19:06.119 12988.044 - 13047.622: 98.3125% ( 18) 00:19:06.119 13047.622 - 13107.200: 98.4219% ( 14) 00:19:06.119 13107.200 - 13166.778: 98.5312% ( 14) 00:19:06.119 13166.778 - 13226.356: 98.6328% ( 13) 00:19:06.119 13226.356 - 13285.935: 98.7422% ( 14) 00:19:06.119 13285.935 - 13345.513: 98.7891% ( 6) 00:19:06.119 13345.513 - 13405.091: 98.8359% ( 6) 00:19:06.119 13405.091 - 13464.669: 98.8594% ( 3) 00:19:06.119 13464.669 - 13524.247: 98.8828% ( 3) 00:19:06.119 13524.247 - 13583.825: 98.9141% ( 4) 00:19:06.119 13583.825 - 13643.404: 98.9375% ( 3) 00:19:06.119 13643.404 - 13702.982: 98.9688% ( 4) 00:19:06.119 13702.982 - 13762.560: 98.9922% ( 3) 00:19:06.119 13762.560 - 13822.138: 99.0000% ( 1) 00:19:06.119 24307.898 - 24427.055: 99.0078% ( 1) 00:19:06.119 24427.055 - 24546.211: 99.0234% ( 2) 00:19:06.119 24546.211 - 24665.367: 99.0469% ( 3) 00:19:06.119 24665.367 - 24784.524: 99.0703% ( 3) 00:19:06.119 24784.524 - 24903.680: 99.0938% ( 3) 00:19:06.119 24903.680 - 25022.836: 99.1172% ( 3) 00:19:06.119 25022.836 - 25141.993: 99.1406% ( 3) 00:19:06.119 25141.993 - 25261.149: 99.1562% ( 2) 00:19:06.119 25261.149 - 25380.305: 99.1797% ( 3) 00:19:06.119 25380.305 - 25499.462: 99.2031% ( 3) 00:19:06.119 25499.462 - 25618.618: 99.2266% ( 3) 00:19:06.119 25618.618 - 25737.775: 99.2500% ( 3) 00:19:06.119 25737.775 - 25856.931: 99.2734% ( 3) 00:19:06.119 25856.931 - 25976.087: 99.2969% ( 3) 00:19:06.119 25976.087 - 26095.244: 99.3203% ( 3) 00:19:06.119 26095.244 - 26214.400: 99.3438% ( 3) 00:19:06.119 26214.400 - 26333.556: 99.3672% ( 3) 00:19:06.119 26333.556 - 26452.713: 99.3906% ( 3) 00:19:06.119 26452.713 - 26571.869: 99.4062% ( 2) 00:19:06.119 26571.869 - 26691.025: 99.4297% ( 3) 00:19:06.119 26691.025 - 26810.182: 99.4531% ( 3) 00:19:06.119 26810.182 - 26929.338: 99.4766% ( 3) 00:19:06.119 26929.338 - 27048.495: 99.5000% ( 3) 00:19:06.119 32648.844 - 32887.156: 99.5312% ( 4) 00:19:06.119 32887.156 - 33125.469: 99.5781% ( 6) 00:19:06.119 33125.469 - 33363.782: 99.6250% ( 6) 00:19:06.119 33363.782 - 33602.095: 99.6719% ( 6) 00:19:06.119 33602.095 - 33840.407: 99.7188% ( 6) 00:19:06.119 33840.407 - 34078.720: 99.7578% ( 5) 00:19:06.119 34078.720 - 34317.033: 99.8047% ( 6) 00:19:06.119 34317.033 - 34555.345: 99.8516% ( 6) 00:19:06.119 34555.345 - 34793.658: 99.8984% ( 6) 00:19:06.119 34793.658 - 35031.971: 99.9453% ( 6) 
00:19:06.119 35031.971 - 35270.284: 99.9922% ( 6) 00:19:06.119 35270.284 - 35508.596: 100.0000% ( 1) 00:19:06.119 00:19:06.119 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:06.119 ============================================================================== 00:19:06.119 Range in us Cumulative IO count 00:19:06.119 7685.585 - 7745.164: 0.0156% ( 2) 00:19:06.119 7745.164 - 7804.742: 0.0703% ( 7) 00:19:06.119 7804.742 - 7864.320: 0.1719% ( 13) 00:19:06.119 7864.320 - 7923.898: 0.2734% ( 13) 00:19:06.119 7923.898 - 7983.476: 0.3984% ( 16) 00:19:06.119 7983.476 - 8043.055: 0.5625% ( 21) 00:19:06.119 8043.055 - 8102.633: 0.7188% ( 20) 00:19:06.119 8102.633 - 8162.211: 0.9141% ( 25) 00:19:06.119 8162.211 - 8221.789: 1.1328% ( 28) 00:19:06.119 8221.789 - 8281.367: 1.4062% ( 35) 00:19:06.119 8281.367 - 8340.945: 1.7188% ( 40) 00:19:06.119 8340.945 - 8400.524: 2.0391% ( 41) 00:19:06.119 8400.524 - 8460.102: 2.3906% ( 45) 00:19:06.119 8460.102 - 8519.680: 2.7344% ( 44) 00:19:06.119 8519.680 - 8579.258: 3.2812% ( 70) 00:19:06.119 8579.258 - 8638.836: 3.9062% ( 80) 00:19:06.119 8638.836 - 8698.415: 4.7188% ( 104) 00:19:06.119 8698.415 - 8757.993: 6.0547% ( 171) 00:19:06.119 8757.993 - 8817.571: 7.9062% ( 237) 00:19:06.119 8817.571 - 8877.149: 10.1562% ( 288) 00:19:06.119 8877.149 - 8936.727: 12.7500% ( 332) 00:19:06.119 8936.727 - 8996.305: 15.6797% ( 375) 00:19:06.119 8996.305 - 9055.884: 18.6094% ( 375) 00:19:06.119 9055.884 - 9115.462: 21.8906% ( 420) 00:19:06.119 9115.462 - 9175.040: 25.3281% ( 440) 00:19:06.119 9175.040 - 9234.618: 28.9062% ( 458) 00:19:06.119 9234.618 - 9294.196: 32.4688% ( 456) 00:19:06.119 9294.196 - 9353.775: 36.1328% ( 469) 00:19:06.119 9353.775 - 9413.353: 39.6875% ( 455) 00:19:06.119 9413.353 - 9472.931: 43.1875% ( 448) 00:19:06.119 9472.931 - 9532.509: 46.9062% ( 476) 00:19:06.119 9532.509 - 9592.087: 50.5312% ( 464) 00:19:06.119 9592.087 - 9651.665: 54.3125% ( 484) 00:19:06.119 9651.665 - 9711.244: 57.9453% ( 465) 00:19:06.119 9711.244 - 9770.822: 61.5078% ( 456) 00:19:06.119 9770.822 - 9830.400: 64.9453% ( 440) 00:19:06.119 9830.400 - 9889.978: 67.8984% ( 378) 00:19:06.119 9889.978 - 9949.556: 70.3125% ( 309) 00:19:06.119 9949.556 - 10009.135: 72.3047% ( 255) 00:19:06.119 10009.135 - 10068.713: 74.0859% ( 228) 00:19:06.119 10068.713 - 10128.291: 75.5547% ( 188) 00:19:06.119 10128.291 - 10187.869: 76.8516% ( 166) 00:19:06.119 10187.869 - 10247.447: 77.8203% ( 124) 00:19:06.119 10247.447 - 10307.025: 78.6406% ( 105) 00:19:06.119 10307.025 - 10366.604: 79.3281% ( 88) 00:19:06.119 10366.604 - 10426.182: 79.9219% ( 76) 00:19:06.119 10426.182 - 10485.760: 80.5391% ( 79) 00:19:06.119 10485.760 - 10545.338: 81.1719% ( 81) 00:19:06.119 10545.338 - 10604.916: 81.8828% ( 91) 00:19:06.119 10604.916 - 10664.495: 82.5156% ( 81) 00:19:06.119 10664.495 - 10724.073: 83.1953% ( 87) 00:19:06.119 10724.073 - 10783.651: 83.7969% ( 77) 00:19:06.120 10783.651 - 10843.229: 84.4062% ( 78) 00:19:06.120 10843.229 - 10902.807: 85.0312% ( 80) 00:19:06.120 10902.807 - 10962.385: 85.7109% ( 87) 00:19:06.120 10962.385 - 11021.964: 86.3828% ( 86) 00:19:06.120 11021.964 - 11081.542: 87.0391% ( 84) 00:19:06.120 11081.542 - 11141.120: 87.7109% ( 86) 00:19:06.120 11141.120 - 11200.698: 88.3672% ( 84) 00:19:06.120 11200.698 - 11260.276: 88.9766% ( 78) 00:19:06.120 11260.276 - 11319.855: 89.5859% ( 78) 00:19:06.120 11319.855 - 11379.433: 90.1797% ( 76) 00:19:06.120 11379.433 - 11439.011: 90.7891% ( 78) 00:19:06.120 11439.011 - 11498.589: 91.3672% ( 74) 00:19:06.120 11498.589 - 
11558.167: 91.9297% ( 72) 00:19:06.120 11558.167 - 11617.745: 92.5625% ( 81) 00:19:06.120 11617.745 - 11677.324: 93.0938% ( 68) 00:19:06.120 11677.324 - 11736.902: 93.5859% ( 63) 00:19:06.120 11736.902 - 11796.480: 93.9922% ( 52) 00:19:06.120 11796.480 - 11856.058: 94.4141% ( 54) 00:19:06.120 11856.058 - 11915.636: 94.7656% ( 45) 00:19:06.120 11915.636 - 11975.215: 95.1172% ( 45) 00:19:06.120 11975.215 - 12034.793: 95.4141% ( 38) 00:19:06.120 12034.793 - 12094.371: 95.6797% ( 34) 00:19:06.120 12094.371 - 12153.949: 95.9297% ( 32) 00:19:06.120 12153.949 - 12213.527: 96.1719% ( 31) 00:19:06.120 12213.527 - 12273.105: 96.3672% ( 25) 00:19:06.120 12273.105 - 12332.684: 96.5391% ( 22) 00:19:06.120 12332.684 - 12392.262: 96.6953% ( 20) 00:19:06.120 12392.262 - 12451.840: 96.8516% ( 20) 00:19:06.120 12451.840 - 12511.418: 97.0234% ( 22) 00:19:06.120 12511.418 - 12570.996: 97.1562% ( 17) 00:19:06.120 12570.996 - 12630.575: 97.3438% ( 24) 00:19:06.120 12630.575 - 12690.153: 97.5000% ( 20) 00:19:06.120 12690.153 - 12749.731: 97.6797% ( 23) 00:19:06.120 12749.731 - 12809.309: 97.8359% ( 20) 00:19:06.120 12809.309 - 12868.887: 97.9844% ( 19) 00:19:06.120 12868.887 - 12928.465: 98.1250% ( 18) 00:19:06.120 12928.465 - 12988.044: 98.2578% ( 17) 00:19:06.120 12988.044 - 13047.622: 98.3828% ( 16) 00:19:06.120 13047.622 - 13107.200: 98.4844% ( 13) 00:19:06.120 13107.200 - 13166.778: 98.6094% ( 16) 00:19:06.120 13166.778 - 13226.356: 98.7109% ( 13) 00:19:06.120 13226.356 - 13285.935: 98.7656% ( 7) 00:19:06.120 13285.935 - 13345.513: 98.7891% ( 3) 00:19:06.120 13345.513 - 13405.091: 98.8125% ( 3) 00:19:06.120 13405.091 - 13464.669: 98.8438% ( 4) 00:19:06.120 13464.669 - 13524.247: 98.8672% ( 3) 00:19:06.120 13524.247 - 13583.825: 98.8906% ( 3) 00:19:06.120 13583.825 - 13643.404: 98.9219% ( 4) 00:19:06.120 13643.404 - 13702.982: 98.9453% ( 3) 00:19:06.120 13702.982 - 13762.560: 98.9766% ( 4) 00:19:06.120 13762.560 - 13822.138: 99.0000% ( 3) 00:19:06.120 21924.771 - 22043.927: 99.0156% ( 2) 00:19:06.120 22043.927 - 22163.084: 99.0312% ( 2) 00:19:06.120 22163.084 - 22282.240: 99.0547% ( 3) 00:19:06.120 22282.240 - 22401.396: 99.0781% ( 3) 00:19:06.120 22401.396 - 22520.553: 99.1016% ( 3) 00:19:06.120 22520.553 - 22639.709: 99.1250% ( 3) 00:19:06.120 22639.709 - 22758.865: 99.1484% ( 3) 00:19:06.120 22758.865 - 22878.022: 99.1641% ( 2) 00:19:06.120 22878.022 - 22997.178: 99.1875% ( 3) 00:19:06.120 22997.178 - 23116.335: 99.2109% ( 3) 00:19:06.120 23116.335 - 23235.491: 99.2344% ( 3) 00:19:06.120 23235.491 - 23354.647: 99.2578% ( 3) 00:19:06.120 23354.647 - 23473.804: 99.2812% ( 3) 00:19:06.120 23473.804 - 23592.960: 99.3047% ( 3) 00:19:06.120 23592.960 - 23712.116: 99.3281% ( 3) 00:19:06.120 23712.116 - 23831.273: 99.3438% ( 2) 00:19:06.120 23831.273 - 23950.429: 99.3672% ( 3) 00:19:06.120 23950.429 - 24069.585: 99.3984% ( 4) 00:19:06.120 24069.585 - 24188.742: 99.4141% ( 2) 00:19:06.120 24188.742 - 24307.898: 99.4375% ( 3) 00:19:06.120 24307.898 - 24427.055: 99.4609% ( 3) 00:19:06.120 24427.055 - 24546.211: 99.4844% ( 3) 00:19:06.120 24546.211 - 24665.367: 99.5000% ( 2) 00:19:06.120 30265.716 - 30384.873: 99.5234% ( 3) 00:19:06.120 30384.873 - 30504.029: 99.5469% ( 3) 00:19:06.120 30504.029 - 30742.342: 99.5859% ( 5) 00:19:06.120 30742.342 - 30980.655: 99.6328% ( 6) 00:19:06.120 30980.655 - 31218.967: 99.6719% ( 5) 00:19:06.120 31218.967 - 31457.280: 99.7188% ( 6) 00:19:06.120 31457.280 - 31695.593: 99.7578% ( 5) 00:19:06.120 31695.593 - 31933.905: 99.7969% ( 5) 00:19:06.120 31933.905 - 32172.218: 99.8438% ( 6) 
00:19:06.120 32172.218 - 32410.531: 99.8906% ( 6) 00:19:06.120 32410.531 - 32648.844: 99.9453% ( 7) 00:19:06.120 32648.844 - 32887.156: 99.9922% ( 6) 00:19:06.120 32887.156 - 33125.469: 100.0000% ( 1) 00:19:06.120 00:19:06.120 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:06.120 ============================================================================== 00:19:06.120 Range in us Cumulative IO count 00:19:06.120 7685.585 - 7745.164: 0.0312% ( 4) 00:19:06.120 7745.164 - 7804.742: 0.0859% ( 7) 00:19:06.120 7804.742 - 7864.320: 0.1953% ( 14) 00:19:06.120 7864.320 - 7923.898: 0.3125% ( 15) 00:19:06.120 7923.898 - 7983.476: 0.4297% ( 15) 00:19:06.120 7983.476 - 8043.055: 0.5547% ( 16) 00:19:06.120 8043.055 - 8102.633: 0.7031% ( 19) 00:19:06.120 8102.633 - 8162.211: 0.9062% ( 26) 00:19:06.120 8162.211 - 8221.789: 1.1641% ( 33) 00:19:06.120 8221.789 - 8281.367: 1.4609% ( 38) 00:19:06.120 8281.367 - 8340.945: 1.7500% ( 37) 00:19:06.120 8340.945 - 8400.524: 2.0859% ( 43) 00:19:06.120 8400.524 - 8460.102: 2.4062% ( 41) 00:19:06.120 8460.102 - 8519.680: 2.7578% ( 45) 00:19:06.120 8519.680 - 8579.258: 3.2109% ( 58) 00:19:06.120 8579.258 - 8638.836: 3.8203% ( 78) 00:19:06.120 8638.836 - 8698.415: 4.7188% ( 115) 00:19:06.120 8698.415 - 8757.993: 6.0391% ( 169) 00:19:06.120 8757.993 - 8817.571: 7.8594% ( 233) 00:19:06.120 8817.571 - 8877.149: 10.2266% ( 303) 00:19:06.120 8877.149 - 8936.727: 12.8594% ( 337) 00:19:06.120 8936.727 - 8996.305: 15.7891% ( 375) 00:19:06.120 8996.305 - 9055.884: 18.9844% ( 409) 00:19:06.120 9055.884 - 9115.462: 22.3438% ( 430) 00:19:06.120 9115.462 - 9175.040: 25.7188% ( 432) 00:19:06.120 9175.040 - 9234.618: 29.3906% ( 470) 00:19:06.120 9234.618 - 9294.196: 32.9609% ( 457) 00:19:06.120 9294.196 - 9353.775: 36.6328% ( 470) 00:19:06.120 9353.775 - 9413.353: 40.1719% ( 453) 00:19:06.120 9413.353 - 9472.931: 43.7188% ( 454) 00:19:06.120 9472.931 - 9532.509: 47.3516% ( 465) 00:19:06.120 9532.509 - 9592.087: 51.0547% ( 474) 00:19:06.120 9592.087 - 9651.665: 54.6797% ( 464) 00:19:06.120 9651.665 - 9711.244: 58.3516% ( 470) 00:19:06.120 9711.244 - 9770.822: 61.7578% ( 436) 00:19:06.120 9770.822 - 9830.400: 64.9766% ( 412) 00:19:06.120 9830.400 - 9889.978: 67.9062% ( 375) 00:19:06.120 9889.978 - 9949.556: 70.3047% ( 307) 00:19:06.120 9949.556 - 10009.135: 72.3438% ( 261) 00:19:06.120 10009.135 - 10068.713: 73.9141% ( 201) 00:19:06.120 10068.713 - 10128.291: 75.2812% ( 175) 00:19:06.120 10128.291 - 10187.869: 76.3984% ( 143) 00:19:06.120 10187.869 - 10247.447: 77.2578% ( 110) 00:19:06.120 10247.447 - 10307.025: 78.0703% ( 104) 00:19:06.120 10307.025 - 10366.604: 78.7734% ( 90) 00:19:06.120 10366.604 - 10426.182: 79.3828% ( 78) 00:19:06.120 10426.182 - 10485.760: 79.9688% ( 75) 00:19:06.120 10485.760 - 10545.338: 80.5938% ( 80) 00:19:06.120 10545.338 - 10604.916: 81.3281% ( 94) 00:19:06.120 10604.916 - 10664.495: 82.0312% ( 90) 00:19:06.120 10664.495 - 10724.073: 82.7656% ( 94) 00:19:06.120 10724.073 - 10783.651: 83.4375% ( 86) 00:19:06.120 10783.651 - 10843.229: 84.1016% ( 85) 00:19:06.120 10843.229 - 10902.807: 84.7969% ( 89) 00:19:06.120 10902.807 - 10962.385: 85.4609% ( 85) 00:19:06.120 10962.385 - 11021.964: 86.2031% ( 95) 00:19:06.120 11021.964 - 11081.542: 86.9453% ( 95) 00:19:06.121 11081.542 - 11141.120: 87.6719% ( 93) 00:19:06.121 11141.120 - 11200.698: 88.3203% ( 83) 00:19:06.121 11200.698 - 11260.276: 88.9766% ( 84) 00:19:06.121 11260.276 - 11319.855: 89.6641% ( 88) 00:19:06.121 11319.855 - 11379.433: 90.2891% ( 80) 00:19:06.121 11379.433 - 11439.011: 
90.8750% ( 75) 00:19:06.121 11439.011 - 11498.589: 91.4141% ( 69) 00:19:06.121 11498.589 - 11558.167: 91.9922% ( 74) 00:19:06.121 11558.167 - 11617.745: 92.5547% ( 72) 00:19:06.121 11617.745 - 11677.324: 93.0859% ( 68) 00:19:06.121 11677.324 - 11736.902: 93.5938% ( 65) 00:19:06.121 11736.902 - 11796.480: 94.0781% ( 62) 00:19:06.121 11796.480 - 11856.058: 94.4141% ( 43) 00:19:06.121 11856.058 - 11915.636: 94.7969% ( 49) 00:19:06.121 11915.636 - 11975.215: 95.0781% ( 36) 00:19:06.121 11975.215 - 12034.793: 95.3672% ( 37) 00:19:06.121 12034.793 - 12094.371: 95.6953% ( 42) 00:19:06.121 12094.371 - 12153.949: 95.9609% ( 34) 00:19:06.121 12153.949 - 12213.527: 96.1641% ( 26) 00:19:06.121 12213.527 - 12273.105: 96.3281% ( 21) 00:19:06.121 12273.105 - 12332.684: 96.5391% ( 27) 00:19:06.121 12332.684 - 12392.262: 96.6953% ( 20) 00:19:06.121 12392.262 - 12451.840: 96.8828% ( 24) 00:19:06.121 12451.840 - 12511.418: 97.0391% ( 20) 00:19:06.121 12511.418 - 12570.996: 97.1875% ( 19) 00:19:06.121 12570.996 - 12630.575: 97.3281% ( 18) 00:19:06.121 12630.575 - 12690.153: 97.4531% ( 16) 00:19:06.121 12690.153 - 12749.731: 97.6016% ( 19) 00:19:06.121 12749.731 - 12809.309: 97.7422% ( 18) 00:19:06.121 12809.309 - 12868.887: 97.8516% ( 14) 00:19:06.121 12868.887 - 12928.465: 98.0156% ( 21) 00:19:06.121 12928.465 - 12988.044: 98.1562% ( 18) 00:19:06.121 12988.044 - 13047.622: 98.2969% ( 18) 00:19:06.121 13047.622 - 13107.200: 98.4531% ( 20) 00:19:06.121 13107.200 - 13166.778: 98.5703% ( 15) 00:19:06.121 13166.778 - 13226.356: 98.6328% ( 8) 00:19:06.121 13226.356 - 13285.935: 98.6875% ( 7) 00:19:06.121 13285.935 - 13345.513: 98.7109% ( 3) 00:19:06.121 13345.513 - 13405.091: 98.7422% ( 4) 00:19:06.121 13405.091 - 13464.669: 98.7656% ( 3) 00:19:06.121 13464.669 - 13524.247: 98.7969% ( 4) 00:19:06.121 13524.247 - 13583.825: 98.8203% ( 3) 00:19:06.121 13583.825 - 13643.404: 98.8438% ( 3) 00:19:06.121 13643.404 - 13702.982: 98.8672% ( 3) 00:19:06.121 13702.982 - 13762.560: 98.8906% ( 3) 00:19:06.121 13762.560 - 13822.138: 98.9141% ( 3) 00:19:06.121 13822.138 - 13881.716: 98.9453% ( 4) 00:19:06.121 13881.716 - 13941.295: 98.9688% ( 3) 00:19:06.121 13941.295 - 14000.873: 99.0000% ( 4) 00:19:06.121 19541.644 - 19660.800: 99.0156% ( 2) 00:19:06.121 19660.800 - 19779.956: 99.0312% ( 2) 00:19:06.121 19779.956 - 19899.113: 99.0625% ( 4) 00:19:06.121 19899.113 - 20018.269: 99.0781% ( 2) 00:19:06.121 20018.269 - 20137.425: 99.1016% ( 3) 00:19:06.121 20137.425 - 20256.582: 99.1250% ( 3) 00:19:06.121 20256.582 - 20375.738: 99.1484% ( 3) 00:19:06.121 20375.738 - 20494.895: 99.1719% ( 3) 00:19:06.121 20494.895 - 20614.051: 99.1953% ( 3) 00:19:06.121 20614.051 - 20733.207: 99.2188% ( 3) 00:19:06.121 20733.207 - 20852.364: 99.2344% ( 2) 00:19:06.121 20852.364 - 20971.520: 99.2578% ( 3) 00:19:06.121 20971.520 - 21090.676: 99.2812% ( 3) 00:19:06.121 21090.676 - 21209.833: 99.3047% ( 3) 00:19:06.121 21209.833 - 21328.989: 99.3281% ( 3) 00:19:06.121 21328.989 - 21448.145: 99.3516% ( 3) 00:19:06.121 21448.145 - 21567.302: 99.3750% ( 3) 00:19:06.121 21567.302 - 21686.458: 99.3984% ( 3) 00:19:06.121 21686.458 - 21805.615: 99.4219% ( 3) 00:19:06.121 21805.615 - 21924.771: 99.4453% ( 3) 00:19:06.121 21924.771 - 22043.927: 99.4688% ( 3) 00:19:06.121 22043.927 - 22163.084: 99.4922% ( 3) 00:19:06.121 22163.084 - 22282.240: 99.5000% ( 1) 00:19:06.121 27763.433 - 27882.589: 99.5156% ( 2) 00:19:06.121 27882.589 - 28001.745: 99.5391% ( 3) 00:19:06.121 28001.745 - 28120.902: 99.5625% ( 3) 00:19:06.121 28120.902 - 28240.058: 99.5938% ( 4) 
00:19:06.121 28240.058 - 28359.215: 99.6172% ( 3) 00:19:06.121 28359.215 - 28478.371: 99.6484% ( 4) 00:19:06.121 28478.371 - 28597.527: 99.6719% ( 3) 00:19:06.121 28597.527 - 28716.684: 99.7031% ( 4) 00:19:06.121 28716.684 - 28835.840: 99.7266% ( 3) 00:19:06.121 28835.840 - 28954.996: 99.7500% ( 3) 00:19:06.121 28954.996 - 29074.153: 99.7734% ( 3) 00:19:06.121 29074.153 - 29193.309: 99.7969% ( 3) 00:19:06.121 29193.309 - 29312.465: 99.8281% ( 4) 00:19:06.121 29312.465 - 29431.622: 99.8516% ( 3) 00:19:06.121 29431.622 - 29550.778: 99.8828% ( 4) 00:19:06.121 29550.778 - 29669.935: 99.9062% ( 3) 00:19:06.121 29669.935 - 29789.091: 99.9297% ( 3) 00:19:06.121 29789.091 - 29908.247: 99.9609% ( 4) 00:19:06.121 29908.247 - 30027.404: 99.9844% ( 3) 00:19:06.121 30027.404 - 30146.560: 100.0000% ( 2) 00:19:06.121 00:19:06.121 11:48:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:07.497 Initializing NVMe Controllers 00:19:07.497 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:07.497 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:07.497 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:07.497 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:07.497 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:07.497 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:07.497 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:07.497 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:07.497 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:07.497 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:07.497 Initialization complete. Launching workers. 00:19:07.497 ======================================================== 00:19:07.497 Latency(us) 00:19:07.497 Device Information : IOPS MiB/s Average min max 00:19:07.497 PCIE (0000:00:10.0) NSID 1 from core 0: 10874.35 127.43 11793.99 9125.25 44145.03 00:19:07.497 PCIE (0000:00:11.0) NSID 1 from core 0: 10874.35 127.43 11766.68 9298.90 41518.66 00:19:07.497 PCIE (0000:00:13.0) NSID 1 from core 0: 10874.35 127.43 11739.29 9348.09 39858.30 00:19:07.497 PCIE (0000:00:12.0) NSID 1 from core 0: 10874.35 127.43 11712.32 9290.82 37357.76 00:19:07.497 PCIE (0000:00:12.0) NSID 2 from core 0: 10874.35 127.43 11684.87 9304.08 34735.17 00:19:07.497 PCIE (0000:00:12.0) NSID 3 from core 0: 10874.35 127.43 11657.67 9290.44 32009.21 00:19:07.497 ======================================================== 00:19:07.497 Total : 65246.07 764.60 11725.80 9125.25 44145.03 00:19:07.497 00:19:07.497 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:07.497 ================================================================================= 00:19:07.497 1.00000% : 9592.087us 00:19:07.497 10.00000% : 10187.869us 00:19:07.497 25.00000% : 10604.916us 00:19:07.497 50.00000% : 11141.120us 00:19:07.497 75.00000% : 11856.058us 00:19:07.497 90.00000% : 13166.778us 00:19:07.497 95.00000% : 14954.124us 00:19:07.497 98.00000% : 18707.549us 00:19:07.497 99.00000% : 33363.782us 00:19:07.497 99.50000% : 41943.040us 00:19:07.497 99.90000% : 43849.542us 00:19:07.497 99.99000% : 44326.167us 00:19:07.497 99.99900% : 44326.167us 00:19:07.497 99.99990% : 44326.167us 00:19:07.497 99.99999% : 44326.167us 00:19:07.497 00:19:07.497 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:07.497 ================================================================================= 00:19:07.497 1.00000% : 9770.822us 
00:19:07.497 10.00000% : 10247.447us 00:19:07.497 25.00000% : 10664.495us 00:19:07.497 50.00000% : 11141.120us 00:19:07.497 75.00000% : 11796.480us 00:19:07.497 90.00000% : 13226.356us 00:19:07.497 95.00000% : 14775.389us 00:19:07.497 98.00000% : 18588.393us 00:19:07.497 99.00000% : 32172.218us 00:19:07.497 99.50000% : 39559.913us 00:19:07.497 99.90000% : 41228.102us 00:19:07.497 99.99000% : 41704.727us 00:19:07.497 99.99900% : 41704.727us 00:19:07.497 99.99990% : 41704.727us 00:19:07.497 99.99999% : 41704.727us 00:19:07.497 00:19:07.497 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:07.497 ================================================================================= 00:19:07.497 1.00000% : 9711.244us 00:19:07.497 10.00000% : 10307.025us 00:19:07.497 25.00000% : 10664.495us 00:19:07.497 50.00000% : 11141.120us 00:19:07.497 75.00000% : 11736.902us 00:19:07.497 90.00000% : 13166.778us 00:19:07.497 95.00000% : 15073.280us 00:19:07.497 98.00000% : 18707.549us 00:19:07.497 99.00000% : 30146.560us 00:19:07.497 99.50000% : 37891.724us 00:19:07.497 99.90000% : 39559.913us 00:19:07.497 99.99000% : 40036.538us 00:19:07.497 99.99900% : 40036.538us 00:19:07.497 99.99990% : 40036.538us 00:19:07.497 99.99999% : 40036.538us 00:19:07.497 00:19:07.497 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:07.497 ================================================================================= 00:19:07.497 1.00000% : 9770.822us 00:19:07.497 10.00000% : 10307.025us 00:19:07.497 25.00000% : 10664.495us 00:19:07.497 50.00000% : 11141.120us 00:19:07.497 75.00000% : 11736.902us 00:19:07.497 90.00000% : 13166.778us 00:19:07.497 95.00000% : 15252.015us 00:19:07.497 98.00000% : 18826.705us 00:19:07.497 99.00000% : 27644.276us 00:19:07.497 99.50000% : 35508.596us 00:19:07.497 99.90000% : 37176.785us 00:19:07.497 99.99000% : 37415.098us 00:19:07.497 99.99900% : 37415.098us 00:19:07.497 99.99990% : 37415.098us 00:19:07.497 99.99999% : 37415.098us 00:19:07.497 00:19:07.497 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:07.497 ================================================================================= 00:19:07.497 1.00000% : 9770.822us 00:19:07.497 10.00000% : 10307.025us 00:19:07.497 25.00000% : 10664.495us 00:19:07.497 50.00000% : 11141.120us 00:19:07.497 75.00000% : 11736.902us 00:19:07.497 90.00000% : 13047.622us 00:19:07.497 95.00000% : 15609.484us 00:19:07.497 98.00000% : 19065.018us 00:19:07.497 99.00000% : 25141.993us 00:19:07.497 99.50000% : 32887.156us 00:19:07.497 99.90000% : 34555.345us 00:19:07.497 99.99000% : 34793.658us 00:19:07.497 99.99900% : 34793.658us 00:19:07.497 99.99990% : 34793.658us 00:19:07.497 99.99999% : 34793.658us 00:19:07.497 00:19:07.497 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:07.497 ================================================================================= 00:19:07.497 1.00000% : 9770.822us 00:19:07.497 10.00000% : 10307.025us 00:19:07.497 25.00000% : 10664.495us 00:19:07.497 50.00000% : 11141.120us 00:19:07.497 75.00000% : 11796.480us 00:19:07.497 90.00000% : 13166.778us 00:19:07.497 95.00000% : 15609.484us 00:19:07.497 98.00000% : 18707.549us 00:19:07.497 99.00000% : 22758.865us 00:19:07.497 99.50000% : 30146.560us 00:19:07.497 99.90000% : 31695.593us 00:19:07.497 99.99000% : 32172.218us 00:19:07.497 99.99900% : 32172.218us 00:19:07.497 99.99990% : 32172.218us 00:19:07.497 99.99999% : 32172.218us 00:19:07.497 00:19:07.497 Latency histogram for PCIE (0000:00:10.0) NSID 1 from 
core 0: 00:19:07.497 ============================================================================== 00:19:07.497 Range in us Cumulative IO count 00:19:07.497 9115.462 - 9175.040: 0.1011% ( 11) 00:19:07.498 9175.040 - 9234.618: 0.1195% ( 2) 00:19:07.498 9234.618 - 9294.196: 0.1379% ( 2) 00:19:07.498 9294.196 - 9353.775: 0.1838% ( 5) 00:19:07.498 9353.775 - 9413.353: 0.3125% ( 14) 00:19:07.498 9413.353 - 9472.931: 0.3952% ( 9) 00:19:07.498 9472.931 - 9532.509: 0.5974% ( 22) 00:19:07.498 9532.509 - 9592.087: 1.1213% ( 57) 00:19:07.498 9592.087 - 9651.665: 1.5901% ( 51) 00:19:07.498 9651.665 - 9711.244: 2.1415% ( 60) 00:19:07.498 9711.244 - 9770.822: 2.8768% ( 80) 00:19:07.498 9770.822 - 9830.400: 3.5662% ( 75) 00:19:07.498 9830.400 - 9889.978: 4.1728% ( 66) 00:19:07.498 9889.978 - 9949.556: 5.0276% ( 93) 00:19:07.498 9949.556 - 10009.135: 6.1305% ( 120) 00:19:07.498 10009.135 - 10068.713: 7.4632% ( 145) 00:19:07.498 10068.713 - 10128.291: 8.7592% ( 141) 00:19:07.498 10128.291 - 10187.869: 10.0460% ( 140) 00:19:07.498 10187.869 - 10247.447: 11.5901% ( 168) 00:19:07.498 10247.447 - 10307.025: 13.4375% ( 201) 00:19:07.498 10307.025 - 10366.604: 15.4412% ( 218) 00:19:07.498 10366.604 - 10426.182: 17.7665% ( 253) 00:19:07.498 10426.182 - 10485.760: 20.2941% ( 275) 00:19:07.498 10485.760 - 10545.338: 22.7574% ( 268) 00:19:07.498 10545.338 - 10604.916: 25.2206% ( 268) 00:19:07.498 10604.916 - 10664.495: 27.7665% ( 277) 00:19:07.498 10664.495 - 10724.073: 30.4688% ( 294) 00:19:07.498 10724.073 - 10783.651: 33.3732% ( 316) 00:19:07.498 10783.651 - 10843.229: 36.2776% ( 316) 00:19:07.498 10843.229 - 10902.807: 39.1544% ( 313) 00:19:07.498 10902.807 - 10962.385: 41.9210% ( 301) 00:19:07.498 10962.385 - 11021.964: 44.8438% ( 318) 00:19:07.498 11021.964 - 11081.542: 47.4908% ( 288) 00:19:07.498 11081.542 - 11141.120: 50.2757% ( 303) 00:19:07.498 11141.120 - 11200.698: 53.0515% ( 302) 00:19:07.498 11200.698 - 11260.276: 55.7169% ( 290) 00:19:07.498 11260.276 - 11319.855: 58.2169% ( 272) 00:19:07.498 11319.855 - 11379.433: 60.6342% ( 263) 00:19:07.498 11379.433 - 11439.011: 62.8217% ( 238) 00:19:07.498 11439.011 - 11498.589: 65.0368% ( 241) 00:19:07.498 11498.589 - 11558.167: 67.1048% ( 225) 00:19:07.498 11558.167 - 11617.745: 69.2004% ( 228) 00:19:07.498 11617.745 - 11677.324: 71.0938% ( 206) 00:19:07.498 11677.324 - 11736.902: 72.8952% ( 196) 00:19:07.498 11736.902 - 11796.480: 74.4761% ( 172) 00:19:07.498 11796.480 - 11856.058: 75.9835% ( 164) 00:19:07.498 11856.058 - 11915.636: 77.2702% ( 140) 00:19:07.498 11915.636 - 11975.215: 78.3456% ( 117) 00:19:07.498 11975.215 - 12034.793: 79.4118% ( 116) 00:19:07.498 12034.793 - 12094.371: 80.3309% ( 100) 00:19:07.498 12094.371 - 12153.949: 81.1121% ( 85) 00:19:07.498 12153.949 - 12213.527: 81.8842% ( 84) 00:19:07.498 12213.527 - 12273.105: 82.5735% ( 75) 00:19:07.498 12273.105 - 12332.684: 83.1801% ( 66) 00:19:07.498 12332.684 - 12392.262: 83.7040% ( 57) 00:19:07.498 12392.262 - 12451.840: 84.1820% ( 52) 00:19:07.498 12451.840 - 12511.418: 84.6967% ( 56) 00:19:07.498 12511.418 - 12570.996: 85.2941% ( 65) 00:19:07.498 12570.996 - 12630.575: 85.8272% ( 58) 00:19:07.498 12630.575 - 12690.153: 86.4246% ( 65) 00:19:07.498 12690.153 - 12749.731: 87.0037% ( 63) 00:19:07.498 12749.731 - 12809.309: 87.5276% ( 57) 00:19:07.498 12809.309 - 12868.887: 88.0147% ( 53) 00:19:07.498 12868.887 - 12928.465: 88.4743% ( 50) 00:19:07.498 12928.465 - 12988.044: 88.9154% ( 48) 00:19:07.498 12988.044 - 13047.622: 89.3566% ( 48) 00:19:07.498 13047.622 - 13107.200: 89.7426% ( 42) 
00:19:07.498 13107.200 - 13166.778: 90.0551% ( 34) 00:19:07.498 13166.778 - 13226.356: 90.3860% ( 36) 00:19:07.498 13226.356 - 13285.935: 90.7537% ( 40) 00:19:07.498 13285.935 - 13345.513: 91.0846% ( 36) 00:19:07.498 13345.513 - 13405.091: 91.4246% ( 37) 00:19:07.498 13405.091 - 13464.669: 91.7004% ( 30) 00:19:07.498 13464.669 - 13524.247: 91.9945% ( 32) 00:19:07.498 13524.247 - 13583.825: 92.2426% ( 27) 00:19:07.498 13583.825 - 13643.404: 92.5368% ( 32) 00:19:07.498 13643.404 - 13702.982: 92.7757% ( 26) 00:19:07.498 13702.982 - 13762.560: 93.0147% ( 26) 00:19:07.498 13762.560 - 13822.138: 93.2721% ( 28) 00:19:07.498 13822.138 - 13881.716: 93.5294% ( 28) 00:19:07.498 13881.716 - 13941.295: 93.7592% ( 25) 00:19:07.498 13941.295 - 14000.873: 93.9154% ( 17) 00:19:07.498 14000.873 - 14060.451: 94.0993% ( 20) 00:19:07.498 14060.451 - 14120.029: 94.2096% ( 12) 00:19:07.498 14120.029 - 14179.607: 94.2923% ( 9) 00:19:07.498 14179.607 - 14239.185: 94.3566% ( 7) 00:19:07.498 14239.185 - 14298.764: 94.4118% ( 6) 00:19:07.498 14298.764 - 14358.342: 94.4945% ( 9) 00:19:07.498 14358.342 - 14417.920: 94.5496% ( 6) 00:19:07.498 14417.920 - 14477.498: 94.6324% ( 9) 00:19:07.498 14477.498 - 14537.076: 94.6967% ( 7) 00:19:07.498 14537.076 - 14596.655: 94.7426% ( 5) 00:19:07.498 14596.655 - 14656.233: 94.7886% ( 5) 00:19:07.498 14656.233 - 14715.811: 94.8346% ( 5) 00:19:07.498 14715.811 - 14775.389: 94.8805% ( 5) 00:19:07.498 14775.389 - 14834.967: 94.9265% ( 5) 00:19:07.498 14834.967 - 14894.545: 94.9724% ( 5) 00:19:07.498 14894.545 - 14954.124: 95.0184% ( 5) 00:19:07.498 14954.124 - 15013.702: 95.0368% ( 2) 00:19:07.498 15013.702 - 15073.280: 95.0460% ( 1) 00:19:07.498 15073.280 - 15132.858: 95.0735% ( 3) 00:19:07.498 15132.858 - 15192.436: 95.0827% ( 1) 00:19:07.498 15192.436 - 15252.015: 95.0919% ( 1) 00:19:07.498 15252.015 - 15371.171: 95.1287% ( 4) 00:19:07.498 15371.171 - 15490.327: 95.1562% ( 3) 00:19:07.498 15490.327 - 15609.484: 95.1746% ( 2) 00:19:07.498 15609.484 - 15728.640: 95.2022% ( 3) 00:19:07.498 15728.640 - 15847.796: 95.2390% ( 4) 00:19:07.498 15847.796 - 15966.953: 95.2665% ( 3) 00:19:07.498 15966.953 - 16086.109: 95.2849% ( 2) 00:19:07.498 16086.109 - 16205.265: 95.3033% ( 2) 00:19:07.498 16205.265 - 16324.422: 95.3401% ( 4) 00:19:07.498 16324.422 - 16443.578: 95.5515% ( 23) 00:19:07.498 16443.578 - 16562.735: 95.7169% ( 18) 00:19:07.498 16562.735 - 16681.891: 95.8272% ( 12) 00:19:07.498 16681.891 - 16801.047: 95.9559% ( 14) 00:19:07.498 16801.047 - 16920.204: 96.0846% ( 14) 00:19:07.498 16920.204 - 17039.360: 96.1949% ( 12) 00:19:07.498 17039.360 - 17158.516: 96.3419% ( 16) 00:19:07.498 17158.516 - 17277.673: 96.4614% ( 13) 00:19:07.498 17277.673 - 17396.829: 96.5809% ( 13) 00:19:07.498 17396.829 - 17515.985: 96.7279% ( 16) 00:19:07.498 17515.985 - 17635.142: 96.8474% ( 13) 00:19:07.498 17635.142 - 17754.298: 96.9761% ( 14) 00:19:07.498 17754.298 - 17873.455: 97.1140% ( 15) 00:19:07.498 17873.455 - 17992.611: 97.2610% ( 16) 00:19:07.498 17992.611 - 18111.767: 97.3805% ( 13) 00:19:07.498 18111.767 - 18230.924: 97.5092% ( 14) 00:19:07.498 18230.924 - 18350.080: 97.6379% ( 14) 00:19:07.498 18350.080 - 18469.236: 97.7849% ( 16) 00:19:07.498 18469.236 - 18588.393: 97.9136% ( 14) 00:19:07.498 18588.393 - 18707.549: 98.0147% ( 11) 00:19:07.498 18707.549 - 18826.705: 98.1158% ( 11) 00:19:07.498 18826.705 - 18945.862: 98.2261% ( 12) 00:19:07.498 18945.862 - 19065.018: 98.3180% ( 10) 00:19:07.498 19065.018 - 19184.175: 98.4283% ( 12) 00:19:07.498 19184.175 - 19303.331: 98.4835% ( 6) 00:19:07.498 
19303.331 - 19422.487: 98.5662% ( 9) 00:19:07.498 19422.487 - 19541.644: 98.6397% ( 8) 00:19:07.498 19541.644 - 19660.800: 98.6765% ( 4) 00:19:07.498 19660.800 - 19779.956: 98.7224% ( 5) 00:19:07.498 19779.956 - 19899.113: 98.7592% ( 4) 00:19:07.498 19899.113 - 20018.269: 98.8051% ( 5) 00:19:07.498 20018.269 - 20137.425: 98.8235% ( 2) 00:19:07.498 32410.531 - 32648.844: 98.8695% ( 5) 00:19:07.498 32648.844 - 32887.156: 98.9062% ( 4) 00:19:07.498 32887.156 - 33125.469: 98.9522% ( 5) 00:19:07.498 33125.469 - 33363.782: 99.0074% ( 6) 00:19:07.498 33363.782 - 33602.095: 99.0441% ( 4) 00:19:07.498 33602.095 - 33840.407: 99.0901% ( 5) 00:19:07.498 33840.407 - 34078.720: 99.1452% ( 6) 00:19:07.498 34078.720 - 34317.033: 99.2004% ( 6) 00:19:07.498 34317.033 - 34555.345: 99.2463% ( 5) 00:19:07.498 34555.345 - 34793.658: 99.3015% ( 6) 00:19:07.498 34793.658 - 35031.971: 99.3566% ( 6) 00:19:07.498 35031.971 - 35270.284: 99.4118% ( 6) 00:19:07.498 41228.102 - 41466.415: 99.4210% ( 1) 00:19:07.498 41466.415 - 41704.727: 99.4761% ( 6) 00:19:07.498 41704.727 - 41943.040: 99.5221% ( 5) 00:19:07.498 41943.040 - 42181.353: 99.5680% ( 5) 00:19:07.498 42181.353 - 42419.665: 99.6140% ( 5) 00:19:07.498 42419.665 - 42657.978: 99.6599% ( 5) 00:19:07.498 42657.978 - 42896.291: 99.7151% ( 6) 00:19:07.498 42896.291 - 43134.604: 99.7702% ( 6) 00:19:07.498 43134.604 - 43372.916: 99.8254% ( 6) 00:19:07.498 43372.916 - 43611.229: 99.8713% ( 5) 00:19:07.498 43611.229 - 43849.542: 99.9265% ( 6) 00:19:07.498 43849.542 - 44087.855: 99.9816% ( 6) 00:19:07.498 44087.855 - 44326.167: 100.0000% ( 2) 00:19:07.498 00:19:07.498 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:07.498 ============================================================================== 00:19:07.498 Range in us Cumulative IO count 00:19:07.498 9294.196 - 9353.775: 0.0643% ( 7) 00:19:07.498 9353.775 - 9413.353: 0.1103% ( 5) 00:19:07.498 9413.353 - 9472.931: 0.1746% ( 7) 00:19:07.498 9472.931 - 9532.509: 0.2482% ( 8) 00:19:07.498 9532.509 - 9592.087: 0.3309% ( 9) 00:19:07.498 9592.087 - 9651.665: 0.5515% ( 24) 00:19:07.499 9651.665 - 9711.244: 0.8640% ( 34) 00:19:07.499 9711.244 - 9770.822: 1.2408% ( 41) 00:19:07.499 9770.822 - 9830.400: 1.8290% ( 64) 00:19:07.499 9830.400 - 9889.978: 2.4908% ( 72) 00:19:07.499 9889.978 - 9949.556: 3.3732% ( 96) 00:19:07.499 9949.556 - 10009.135: 4.5864% ( 132) 00:19:07.499 10009.135 - 10068.713: 5.7904% ( 131) 00:19:07.499 10068.713 - 10128.291: 7.1324% ( 146) 00:19:07.499 10128.291 - 10187.869: 8.6949% ( 170) 00:19:07.499 10187.869 - 10247.447: 10.2849% ( 173) 00:19:07.499 10247.447 - 10307.025: 12.0404% ( 191) 00:19:07.499 10307.025 - 10366.604: 14.0257% ( 216) 00:19:07.499 10366.604 - 10426.182: 16.0938% ( 225) 00:19:07.499 10426.182 - 10485.760: 18.5110% ( 263) 00:19:07.499 10485.760 - 10545.338: 20.9283% ( 263) 00:19:07.499 10545.338 - 10604.916: 23.7224% ( 304) 00:19:07.499 10604.916 - 10664.495: 26.7371% ( 328) 00:19:07.499 10664.495 - 10724.073: 29.8805% ( 342) 00:19:07.499 10724.073 - 10783.651: 32.9871% ( 338) 00:19:07.499 10783.651 - 10843.229: 36.1029% ( 339) 00:19:07.499 10843.229 - 10902.807: 39.3199% ( 350) 00:19:07.499 10902.807 - 10962.385: 42.4724% ( 343) 00:19:07.499 10962.385 - 11021.964: 45.5974% ( 340) 00:19:07.499 11021.964 - 11081.542: 48.6397% ( 331) 00:19:07.499 11081.542 - 11141.120: 51.6176% ( 324) 00:19:07.499 11141.120 - 11200.698: 54.4761% ( 311) 00:19:07.499 11200.698 - 11260.276: 57.2978% ( 307) 00:19:07.499 11260.276 - 11319.855: 59.9265% ( 286) 00:19:07.499 11319.855 - 
11379.433: 62.3529% ( 264) 00:19:07.499 11379.433 - 11439.011: 64.6415% ( 249) 00:19:07.499 11439.011 - 11498.589: 66.8107% ( 236) 00:19:07.499 11498.589 - 11558.167: 68.9798% ( 236) 00:19:07.499 11558.167 - 11617.745: 70.9835% ( 218) 00:19:07.499 11617.745 - 11677.324: 72.7206% ( 189) 00:19:07.499 11677.324 - 11736.902: 74.4577% ( 189) 00:19:07.499 11736.902 - 11796.480: 75.9467% ( 162) 00:19:07.499 11796.480 - 11856.058: 77.2518% ( 142) 00:19:07.499 11856.058 - 11915.636: 78.5662% ( 143) 00:19:07.499 11915.636 - 11975.215: 79.7702% ( 131) 00:19:07.499 11975.215 - 12034.793: 80.7996% ( 112) 00:19:07.499 12034.793 - 12094.371: 81.5993% ( 87) 00:19:07.499 12094.371 - 12153.949: 82.2978% ( 76) 00:19:07.499 12153.949 - 12213.527: 82.9504% ( 71) 00:19:07.499 12213.527 - 12273.105: 83.5386% ( 64) 00:19:07.499 12273.105 - 12332.684: 84.0349% ( 54) 00:19:07.499 12332.684 - 12392.262: 84.4853% ( 49) 00:19:07.499 12392.262 - 12451.840: 84.9816% ( 54) 00:19:07.499 12451.840 - 12511.418: 85.4412% ( 50) 00:19:07.499 12511.418 - 12570.996: 85.8824% ( 48) 00:19:07.499 12570.996 - 12630.575: 86.4246% ( 59) 00:19:07.499 12630.575 - 12690.153: 86.8199% ( 43) 00:19:07.499 12690.153 - 12749.731: 87.3162% ( 54) 00:19:07.499 12749.731 - 12809.309: 87.8401% ( 57) 00:19:07.499 12809.309 - 12868.887: 88.2353% ( 43) 00:19:07.499 12868.887 - 12928.465: 88.6213% ( 42) 00:19:07.499 12928.465 - 12988.044: 88.8971% ( 30) 00:19:07.499 12988.044 - 13047.622: 89.2096% ( 34) 00:19:07.499 13047.622 - 13107.200: 89.5496% ( 37) 00:19:07.499 13107.200 - 13166.778: 89.9081% ( 39) 00:19:07.499 13166.778 - 13226.356: 90.3033% ( 43) 00:19:07.499 13226.356 - 13285.935: 90.6434% ( 37) 00:19:07.499 13285.935 - 13345.513: 90.9375% ( 32) 00:19:07.499 13345.513 - 13405.091: 91.4338% ( 54) 00:19:07.499 13405.091 - 13464.669: 91.8015% ( 40) 00:19:07.499 13464.669 - 13524.247: 92.1048% ( 33) 00:19:07.499 13524.247 - 13583.825: 92.5092% ( 44) 00:19:07.499 13583.825 - 13643.404: 92.7849% ( 30) 00:19:07.499 13643.404 - 13702.982: 93.0423% ( 28) 00:19:07.499 13702.982 - 13762.560: 93.2996% ( 28) 00:19:07.499 13762.560 - 13822.138: 93.4926% ( 21) 00:19:07.499 13822.138 - 13881.716: 93.6949% ( 22) 00:19:07.499 13881.716 - 13941.295: 93.9062% ( 23) 00:19:07.499 13941.295 - 14000.873: 94.0717% ( 18) 00:19:07.499 14000.873 - 14060.451: 94.2371% ( 18) 00:19:07.499 14060.451 - 14120.029: 94.3474% ( 12) 00:19:07.499 14120.029 - 14179.607: 94.4118% ( 7) 00:19:07.499 14179.607 - 14239.185: 94.4945% ( 9) 00:19:07.499 14239.185 - 14298.764: 94.5496% ( 6) 00:19:07.499 14298.764 - 14358.342: 94.6140% ( 7) 00:19:07.499 14358.342 - 14417.920: 94.6967% ( 9) 00:19:07.499 14417.920 - 14477.498: 94.7610% ( 7) 00:19:07.499 14477.498 - 14537.076: 94.8162% ( 6) 00:19:07.499 14537.076 - 14596.655: 94.8805% ( 7) 00:19:07.499 14596.655 - 14656.233: 94.9449% ( 7) 00:19:07.499 14656.233 - 14715.811: 94.9816% ( 4) 00:19:07.499 14715.811 - 14775.389: 95.0184% ( 4) 00:19:07.499 14775.389 - 14834.967: 95.0551% ( 4) 00:19:07.499 14834.967 - 14894.545: 95.0827% ( 3) 00:19:07.499 14894.545 - 14954.124: 95.1379% ( 6) 00:19:07.499 14954.124 - 15013.702: 95.1654% ( 3) 00:19:07.499 15013.702 - 15073.280: 95.1930% ( 3) 00:19:07.499 15073.280 - 15132.858: 95.2022% ( 1) 00:19:07.499 15132.858 - 15192.436: 95.2114% ( 1) 00:19:07.499 15192.436 - 15252.015: 95.2298% ( 2) 00:19:07.499 15252.015 - 15371.171: 95.2574% ( 3) 00:19:07.499 15371.171 - 15490.327: 95.2757% ( 2) 00:19:07.499 15490.327 - 15609.484: 95.2941% ( 2) 00:19:07.499 16324.422 - 16443.578: 95.3125% ( 2) 00:19:07.499 
16443.578 - 16562.735: 95.3952% ( 9) 00:19:07.499 16562.735 - 16681.891: 95.4871% ( 10) 00:19:07.499 16681.891 - 16801.047: 95.5515% ( 7) 00:19:07.499 16801.047 - 16920.204: 95.6342% ( 9) 00:19:07.499 16920.204 - 17039.360: 95.7261% ( 10) 00:19:07.499 17039.360 - 17158.516: 95.8364% ( 12) 00:19:07.499 17158.516 - 17277.673: 95.9743% ( 15) 00:19:07.499 17277.673 - 17396.829: 96.1397% ( 18) 00:19:07.499 17396.829 - 17515.985: 96.3695% ( 25) 00:19:07.499 17515.985 - 17635.142: 96.6176% ( 27) 00:19:07.499 17635.142 - 17754.298: 96.8382% ( 24) 00:19:07.499 17754.298 - 17873.455: 97.0772% ( 26) 00:19:07.499 17873.455 - 17992.611: 97.2886% ( 23) 00:19:07.499 17992.611 - 18111.767: 97.4724% ( 20) 00:19:07.499 18111.767 - 18230.924: 97.6195% ( 16) 00:19:07.499 18230.924 - 18350.080: 97.7757% ( 17) 00:19:07.499 18350.080 - 18469.236: 97.9136% ( 15) 00:19:07.499 18469.236 - 18588.393: 98.0239% ( 12) 00:19:07.499 18588.393 - 18707.549: 98.1250% ( 11) 00:19:07.499 18707.549 - 18826.705: 98.2261% ( 11) 00:19:07.499 18826.705 - 18945.862: 98.3272% ( 11) 00:19:07.499 18945.862 - 19065.018: 98.4559% ( 14) 00:19:07.499 19065.018 - 19184.175: 98.5570% ( 11) 00:19:07.499 19184.175 - 19303.331: 98.6305% ( 8) 00:19:07.499 19303.331 - 19422.487: 98.6673% ( 4) 00:19:07.499 19422.487 - 19541.644: 98.7224% ( 6) 00:19:07.499 19541.644 - 19660.800: 98.7684% ( 5) 00:19:07.499 19660.800 - 19779.956: 98.8051% ( 4) 00:19:07.499 19779.956 - 19899.113: 98.8235% ( 2) 00:19:07.499 31218.967 - 31457.280: 98.8787% ( 6) 00:19:07.499 31457.280 - 31695.593: 98.9338% ( 6) 00:19:07.499 31695.593 - 31933.905: 98.9890% ( 6) 00:19:07.499 31933.905 - 32172.218: 99.0533% ( 7) 00:19:07.499 32172.218 - 32410.531: 99.1085% ( 6) 00:19:07.499 32410.531 - 32648.844: 99.1636% ( 6) 00:19:07.499 32648.844 - 32887.156: 99.2188% ( 6) 00:19:07.499 32887.156 - 33125.469: 99.2647% ( 5) 00:19:07.499 33125.469 - 33363.782: 99.3107% ( 5) 00:19:07.499 33363.782 - 33602.095: 99.3750% ( 7) 00:19:07.499 33602.095 - 33840.407: 99.4118% ( 4) 00:19:07.499 39083.287 - 39321.600: 99.4669% ( 6) 00:19:07.499 39321.600 - 39559.913: 99.5312% ( 7) 00:19:07.499 39559.913 - 39798.225: 99.5864% ( 6) 00:19:07.499 39798.225 - 40036.538: 99.6415% ( 6) 00:19:07.499 40036.538 - 40274.851: 99.6967% ( 6) 00:19:07.499 40274.851 - 40513.164: 99.7518% ( 6) 00:19:07.499 40513.164 - 40751.476: 99.8070% ( 6) 00:19:07.499 40751.476 - 40989.789: 99.8713% ( 7) 00:19:07.499 40989.789 - 41228.102: 99.9265% ( 6) 00:19:07.499 41228.102 - 41466.415: 99.9816% ( 6) 00:19:07.499 41466.415 - 41704.727: 100.0000% ( 2) 00:19:07.499 00:19:07.499 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:07.499 ============================================================================== 00:19:07.499 Range in us Cumulative IO count 00:19:07.499 9294.196 - 9353.775: 0.0092% ( 1) 00:19:07.499 9353.775 - 9413.353: 0.0643% ( 6) 00:19:07.499 9413.353 - 9472.931: 0.1103% ( 5) 00:19:07.499 9472.931 - 9532.509: 0.2206% ( 12) 00:19:07.499 9532.509 - 9592.087: 0.4320% ( 23) 00:19:07.499 9592.087 - 9651.665: 0.7077% ( 30) 00:19:07.499 9651.665 - 9711.244: 1.0846% ( 41) 00:19:07.499 9711.244 - 9770.822: 1.5349% ( 49) 00:19:07.499 9770.822 - 9830.400: 2.0404% ( 55) 00:19:07.499 9830.400 - 9889.978: 2.6838% ( 70) 00:19:07.499 9889.978 - 9949.556: 3.5938% ( 99) 00:19:07.499 9949.556 - 10009.135: 4.5496% ( 104) 00:19:07.499 10009.135 - 10068.713: 5.6893% ( 124) 00:19:07.499 10068.713 - 10128.291: 7.0037% ( 143) 00:19:07.499 10128.291 - 10187.869: 8.3272% ( 144) 00:19:07.499 10187.869 - 10247.447: 9.9173% 
( 173) 00:19:07.499 10247.447 - 10307.025: 11.4522% ( 167) 00:19:07.499 10307.025 - 10366.604: 13.5202% ( 225) 00:19:07.499 10366.604 - 10426.182: 15.9007% ( 259) 00:19:07.499 10426.182 - 10485.760: 18.4283% ( 275) 00:19:07.499 10485.760 - 10545.338: 21.0110% ( 281) 00:19:07.499 10545.338 - 10604.916: 23.5478% ( 276) 00:19:07.499 10604.916 - 10664.495: 26.3971% ( 310) 00:19:07.499 10664.495 - 10724.073: 29.6875% ( 358) 00:19:07.500 10724.073 - 10783.651: 32.9779% ( 358) 00:19:07.500 10783.651 - 10843.229: 36.2040% ( 351) 00:19:07.500 10843.229 - 10902.807: 39.4669% ( 355) 00:19:07.500 10902.807 - 10962.385: 42.7298% ( 355) 00:19:07.500 10962.385 - 11021.964: 45.9191% ( 347) 00:19:07.500 11021.964 - 11081.542: 49.2831% ( 366) 00:19:07.500 11081.542 - 11141.120: 52.4173% ( 341) 00:19:07.500 11141.120 - 11200.698: 55.3952% ( 324) 00:19:07.500 11200.698 - 11260.276: 58.4191% ( 329) 00:19:07.500 11260.276 - 11319.855: 61.0294% ( 284) 00:19:07.500 11319.855 - 11379.433: 63.6397% ( 284) 00:19:07.500 11379.433 - 11439.011: 66.0662% ( 264) 00:19:07.500 11439.011 - 11498.589: 68.2261% ( 235) 00:19:07.500 11498.589 - 11558.167: 70.1103% ( 205) 00:19:07.500 11558.167 - 11617.745: 71.8290% ( 187) 00:19:07.500 11617.745 - 11677.324: 73.6213% ( 195) 00:19:07.500 11677.324 - 11736.902: 75.1379% ( 165) 00:19:07.500 11736.902 - 11796.480: 76.6728% ( 167) 00:19:07.500 11796.480 - 11856.058: 78.0699% ( 152) 00:19:07.500 11856.058 - 11915.636: 79.3566% ( 140) 00:19:07.500 11915.636 - 11975.215: 80.3952% ( 113) 00:19:07.500 11975.215 - 12034.793: 81.3143% ( 100) 00:19:07.500 12034.793 - 12094.371: 82.1324% ( 89) 00:19:07.500 12094.371 - 12153.949: 82.8309% ( 76) 00:19:07.500 12153.949 - 12213.527: 83.3548% ( 57) 00:19:07.500 12213.527 - 12273.105: 83.7960% ( 48) 00:19:07.500 12273.105 - 12332.684: 84.3199% ( 57) 00:19:07.500 12332.684 - 12392.262: 84.8070% ( 53) 00:19:07.500 12392.262 - 12451.840: 85.2482% ( 48) 00:19:07.500 12451.840 - 12511.418: 85.6250% ( 41) 00:19:07.500 12511.418 - 12570.996: 86.0662% ( 48) 00:19:07.500 12570.996 - 12630.575: 86.5533% ( 53) 00:19:07.500 12630.575 - 12690.153: 86.9393% ( 42) 00:19:07.500 12690.153 - 12749.731: 87.4357% ( 54) 00:19:07.500 12749.731 - 12809.309: 87.9412% ( 55) 00:19:07.500 12809.309 - 12868.887: 88.3915% ( 49) 00:19:07.500 12868.887 - 12928.465: 88.7776% ( 42) 00:19:07.500 12928.465 - 12988.044: 89.1452% ( 40) 00:19:07.500 12988.044 - 13047.622: 89.4853% ( 37) 00:19:07.500 13047.622 - 13107.200: 89.8070% ( 35) 00:19:07.500 13107.200 - 13166.778: 90.0919% ( 31) 00:19:07.500 13166.778 - 13226.356: 90.3585% ( 29) 00:19:07.500 13226.356 - 13285.935: 90.6066% ( 27) 00:19:07.500 13285.935 - 13345.513: 90.8640% ( 28) 00:19:07.500 13345.513 - 13405.091: 91.1489% ( 31) 00:19:07.500 13405.091 - 13464.669: 91.3879% ( 26) 00:19:07.500 13464.669 - 13524.247: 91.7279% ( 37) 00:19:07.500 13524.247 - 13583.825: 92.1140% ( 42) 00:19:07.500 13583.825 - 13643.404: 92.4632% ( 38) 00:19:07.500 13643.404 - 13702.982: 92.8033% ( 37) 00:19:07.500 13702.982 - 13762.560: 93.0974% ( 32) 00:19:07.500 13762.560 - 13822.138: 93.3180% ( 24) 00:19:07.500 13822.138 - 13881.716: 93.4926% ( 19) 00:19:07.500 13881.716 - 13941.295: 93.6765% ( 20) 00:19:07.500 13941.295 - 14000.873: 93.8051% ( 14) 00:19:07.500 14000.873 - 14060.451: 93.9706% ( 18) 00:19:07.500 14060.451 - 14120.029: 94.1085% ( 15) 00:19:07.500 14120.029 - 14179.607: 94.2188% ( 12) 00:19:07.500 14179.607 - 14239.185: 94.3566% ( 15) 00:19:07.500 14239.185 - 14298.764: 94.5772% ( 24) 00:19:07.500 14298.764 - 14358.342: 94.6599% ( 9) 
00:19:07.500 14358.342 - 14417.920: 94.7518% ( 10) 00:19:07.500 14417.920 - 14477.498: 94.8254% ( 8) 00:19:07.500 14477.498 - 14537.076: 94.8529% ( 3) 00:19:07.500 14537.076 - 14596.655: 94.8713% ( 2) 00:19:07.500 14596.655 - 14656.233: 94.8989% ( 3) 00:19:07.500 14656.233 - 14715.811: 94.9081% ( 1) 00:19:07.500 14715.811 - 14775.389: 94.9449% ( 4) 00:19:07.500 14775.389 - 14834.967: 94.9540% ( 1) 00:19:07.500 14834.967 - 14894.545: 94.9724% ( 2) 00:19:07.500 14894.545 - 14954.124: 94.9816% ( 1) 00:19:07.500 14954.124 - 15013.702: 94.9908% ( 1) 00:19:07.500 15013.702 - 15073.280: 95.0000% ( 1) 00:19:07.500 15073.280 - 15132.858: 95.0276% ( 3) 00:19:07.500 15132.858 - 15192.436: 95.0551% ( 3) 00:19:07.500 15192.436 - 15252.015: 95.1011% ( 5) 00:19:07.500 15252.015 - 15371.171: 95.1562% ( 6) 00:19:07.500 15371.171 - 15490.327: 95.2022% ( 5) 00:19:07.500 15490.327 - 15609.484: 95.2757% ( 8) 00:19:07.500 15609.484 - 15728.640: 95.2941% ( 2) 00:19:07.500 16562.735 - 16681.891: 95.3309% ( 4) 00:19:07.500 16681.891 - 16801.047: 95.3952% ( 7) 00:19:07.500 16801.047 - 16920.204: 95.5147% ( 13) 00:19:07.500 16920.204 - 17039.360: 95.7353% ( 24) 00:19:07.500 17039.360 - 17158.516: 95.8456% ( 12) 00:19:07.500 17158.516 - 17277.673: 96.0110% ( 18) 00:19:07.500 17277.673 - 17396.829: 96.1857% ( 19) 00:19:07.500 17396.829 - 17515.985: 96.3695% ( 20) 00:19:07.500 17515.985 - 17635.142: 96.5993% ( 25) 00:19:07.500 17635.142 - 17754.298: 96.8290% ( 25) 00:19:07.500 17754.298 - 17873.455: 97.0129% ( 20) 00:19:07.500 17873.455 - 17992.611: 97.2059% ( 21) 00:19:07.500 17992.611 - 18111.767: 97.3529% ( 16) 00:19:07.500 18111.767 - 18230.924: 97.5184% ( 18) 00:19:07.500 18230.924 - 18350.080: 97.6562% ( 15) 00:19:07.500 18350.080 - 18469.236: 97.8033% ( 16) 00:19:07.500 18469.236 - 18588.393: 97.9228% ( 13) 00:19:07.500 18588.393 - 18707.549: 98.0699% ( 16) 00:19:07.500 18707.549 - 18826.705: 98.1801% ( 12) 00:19:07.500 18826.705 - 18945.862: 98.2904% ( 12) 00:19:07.500 18945.862 - 19065.018: 98.4283% ( 15) 00:19:07.500 19065.018 - 19184.175: 98.5202% ( 10) 00:19:07.500 19184.175 - 19303.331: 98.6121% ( 10) 00:19:07.500 19303.331 - 19422.487: 98.6765% ( 7) 00:19:07.500 19422.487 - 19541.644: 98.7224% ( 5) 00:19:07.500 19541.644 - 19660.800: 98.7684% ( 5) 00:19:07.500 19660.800 - 19779.956: 98.8143% ( 5) 00:19:07.500 19779.956 - 19899.113: 98.8235% ( 1) 00:19:07.500 29312.465 - 29431.622: 98.8419% ( 2) 00:19:07.500 29431.622 - 29550.778: 98.8695% ( 3) 00:19:07.500 29550.778 - 29669.935: 98.8971% ( 3) 00:19:07.500 29669.935 - 29789.091: 98.9338% ( 4) 00:19:07.500 29789.091 - 29908.247: 98.9614% ( 3) 00:19:07.500 29908.247 - 30027.404: 98.9890% ( 3) 00:19:07.500 30027.404 - 30146.560: 99.0165% ( 3) 00:19:07.500 30146.560 - 30265.716: 99.0441% ( 3) 00:19:07.500 30265.716 - 30384.873: 99.0717% ( 3) 00:19:07.500 30384.873 - 30504.029: 99.0993% ( 3) 00:19:07.500 30504.029 - 30742.342: 99.1544% ( 6) 00:19:07.500 30742.342 - 30980.655: 99.2096% ( 6) 00:19:07.500 30980.655 - 31218.967: 99.2647% ( 6) 00:19:07.500 31218.967 - 31457.280: 99.3199% ( 6) 00:19:07.500 31457.280 - 31695.593: 99.3842% ( 7) 00:19:07.500 31695.593 - 31933.905: 99.4118% ( 3) 00:19:07.500 37415.098 - 37653.411: 99.4577% ( 5) 00:19:07.500 37653.411 - 37891.724: 99.5221% ( 7) 00:19:07.500 37891.724 - 38130.036: 99.5772% ( 6) 00:19:07.500 38130.036 - 38368.349: 99.6415% ( 7) 00:19:07.500 38368.349 - 38606.662: 99.7059% ( 7) 00:19:07.500 38606.662 - 38844.975: 99.7610% ( 6) 00:19:07.500 38844.975 - 39083.287: 99.7978% ( 4) 00:19:07.500 39083.287 - 
39321.600: 99.8529% ( 6) 00:19:07.500 39321.600 - 39559.913: 99.9173% ( 7) 00:19:07.500 39559.913 - 39798.225: 99.9816% ( 7) 00:19:07.500 39798.225 - 40036.538: 100.0000% ( 2) 00:19:07.500 00:19:07.500 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:07.500 ============================================================================== 00:19:07.500 Range in us Cumulative IO count 00:19:07.500 9234.618 - 9294.196: 0.0184% ( 2) 00:19:07.500 9294.196 - 9353.775: 0.0735% ( 6) 00:19:07.500 9353.775 - 9413.353: 0.1195% ( 5) 00:19:07.500 9413.353 - 9472.931: 0.1838% ( 7) 00:19:07.500 9472.931 - 9532.509: 0.2941% ( 12) 00:19:07.500 9532.509 - 9592.087: 0.5239% ( 25) 00:19:07.500 9592.087 - 9651.665: 0.7077% ( 20) 00:19:07.500 9651.665 - 9711.244: 0.9099% ( 22) 00:19:07.500 9711.244 - 9770.822: 1.2316% ( 35) 00:19:07.500 9770.822 - 9830.400: 1.8658% ( 69) 00:19:07.500 9830.400 - 9889.978: 2.5551% ( 75) 00:19:07.500 9889.978 - 9949.556: 3.3732% ( 89) 00:19:07.500 9949.556 - 10009.135: 4.2739% ( 98) 00:19:07.500 10009.135 - 10068.713: 5.2390% ( 105) 00:19:07.500 10068.713 - 10128.291: 6.2776% ( 113) 00:19:07.500 10128.291 - 10187.869: 7.4724% ( 130) 00:19:07.500 10187.869 - 10247.447: 8.9154% ( 157) 00:19:07.500 10247.447 - 10307.025: 10.7445% ( 199) 00:19:07.500 10307.025 - 10366.604: 12.6838% ( 211) 00:19:07.500 10366.604 - 10426.182: 14.7426% ( 224) 00:19:07.500 10426.182 - 10485.760: 17.1232% ( 259) 00:19:07.500 10485.760 - 10545.338: 19.9540% ( 308) 00:19:07.500 10545.338 - 10604.916: 23.0515% ( 337) 00:19:07.500 10604.916 - 10664.495: 26.6452% ( 391) 00:19:07.500 10664.495 - 10724.073: 30.2298% ( 390) 00:19:07.500 10724.073 - 10783.651: 33.8603% ( 395) 00:19:07.500 10783.651 - 10843.229: 37.1783% ( 361) 00:19:07.500 10843.229 - 10902.807: 40.4320% ( 354) 00:19:07.500 10902.807 - 10962.385: 43.4099% ( 324) 00:19:07.500 10962.385 - 11021.964: 46.3235% ( 317) 00:19:07.500 11021.964 - 11081.542: 49.5129% ( 347) 00:19:07.500 11081.542 - 11141.120: 52.7206% ( 349) 00:19:07.500 11141.120 - 11200.698: 55.8824% ( 344) 00:19:07.500 11200.698 - 11260.276: 58.8695% ( 325) 00:19:07.500 11260.276 - 11319.855: 61.4522% ( 281) 00:19:07.500 11319.855 - 11379.433: 64.0165% ( 279) 00:19:07.500 11379.433 - 11439.011: 66.3143% ( 250) 00:19:07.500 11439.011 - 11498.589: 68.5386% ( 242) 00:19:07.500 11498.589 - 11558.167: 70.4963% ( 213) 00:19:07.500 11558.167 - 11617.745: 72.3713% ( 204) 00:19:07.500 11617.745 - 11677.324: 73.9430% ( 171) 00:19:07.500 11677.324 - 11736.902: 75.3401% ( 152) 00:19:07.500 11736.902 - 11796.480: 76.7463% ( 153) 00:19:07.501 11796.480 - 11856.058: 77.9228% ( 128) 00:19:07.501 11856.058 - 11915.636: 78.9706% ( 114) 00:19:07.501 11915.636 - 11975.215: 80.0000% ( 112) 00:19:07.501 11975.215 - 12034.793: 81.0570% ( 115) 00:19:07.501 12034.793 - 12094.371: 81.9485% ( 97) 00:19:07.501 12094.371 - 12153.949: 82.7757% ( 90) 00:19:07.501 12153.949 - 12213.527: 83.4651% ( 75) 00:19:07.501 12213.527 - 12273.105: 84.0349% ( 62) 00:19:07.501 12273.105 - 12332.684: 84.5864% ( 60) 00:19:07.501 12332.684 - 12392.262: 85.0827% ( 54) 00:19:07.501 12392.262 - 12451.840: 85.5882% ( 55) 00:19:07.501 12451.840 - 12511.418: 86.0478% ( 50) 00:19:07.501 12511.418 - 12570.996: 86.4798% ( 47) 00:19:07.501 12570.996 - 12630.575: 86.9118% ( 47) 00:19:07.501 12630.575 - 12690.153: 87.3254% ( 45) 00:19:07.501 12690.153 - 12749.731: 87.7665% ( 48) 00:19:07.501 12749.731 - 12809.309: 88.2629% ( 54) 00:19:07.501 12809.309 - 12868.887: 88.6765% ( 45) 00:19:07.501 12868.887 - 12928.465: 89.0993% ( 46) 
00:19:07.501 12928.465 - 12988.044: 89.4026% ( 33) 00:19:07.501 12988.044 - 13047.622: 89.6691% ( 29) 00:19:07.501 13047.622 - 13107.200: 89.9173% ( 27) 00:19:07.501 13107.200 - 13166.778: 90.1287% ( 23) 00:19:07.501 13166.778 - 13226.356: 90.4044% ( 30) 00:19:07.501 13226.356 - 13285.935: 90.7261% ( 35) 00:19:07.501 13285.935 - 13345.513: 91.1581% ( 47) 00:19:07.501 13345.513 - 13405.091: 91.4154% ( 28) 00:19:07.501 13405.091 - 13464.669: 91.6636% ( 27) 00:19:07.501 13464.669 - 13524.247: 91.9393% ( 30) 00:19:07.501 13524.247 - 13583.825: 92.3621% ( 46) 00:19:07.501 13583.825 - 13643.404: 92.6379% ( 30) 00:19:07.501 13643.404 - 13702.982: 92.8401% ( 22) 00:19:07.501 13702.982 - 13762.560: 93.0423% ( 22) 00:19:07.501 13762.560 - 13822.138: 93.2077% ( 18) 00:19:07.501 13822.138 - 13881.716: 93.3456% ( 15) 00:19:07.501 13881.716 - 13941.295: 93.5754% ( 25) 00:19:07.501 13941.295 - 14000.873: 93.7224% ( 16) 00:19:07.501 14000.873 - 14060.451: 93.8603% ( 15) 00:19:07.501 14060.451 - 14120.029: 93.9614% ( 11) 00:19:07.501 14120.029 - 14179.607: 94.0625% ( 11) 00:19:07.501 14179.607 - 14239.185: 94.1544% ( 10) 00:19:07.501 14239.185 - 14298.764: 94.2555% ( 11) 00:19:07.501 14298.764 - 14358.342: 94.3658% ( 12) 00:19:07.501 14358.342 - 14417.920: 94.4669% ( 11) 00:19:07.501 14417.920 - 14477.498: 94.5312% ( 7) 00:19:07.501 14477.498 - 14537.076: 94.6048% ( 8) 00:19:07.501 14537.076 - 14596.655: 94.6599% ( 6) 00:19:07.501 14596.655 - 14656.233: 94.7151% ( 6) 00:19:07.501 14656.233 - 14715.811: 94.7702% ( 6) 00:19:07.501 14715.811 - 14775.389: 94.8070% ( 4) 00:19:07.501 14775.389 - 14834.967: 94.8346% ( 3) 00:19:07.501 14834.967 - 14894.545: 94.8713% ( 4) 00:19:07.501 14894.545 - 14954.124: 94.8989% ( 3) 00:19:07.501 14954.124 - 15013.702: 94.9173% ( 2) 00:19:07.501 15013.702 - 15073.280: 94.9357% ( 2) 00:19:07.501 15073.280 - 15132.858: 94.9632% ( 3) 00:19:07.501 15132.858 - 15192.436: 94.9724% ( 1) 00:19:07.501 15192.436 - 15252.015: 95.0000% ( 3) 00:19:07.501 15252.015 - 15371.171: 95.0276% ( 3) 00:19:07.501 15371.171 - 15490.327: 95.0551% ( 3) 00:19:07.501 15490.327 - 15609.484: 95.0827% ( 3) 00:19:07.501 15609.484 - 15728.640: 95.1011% ( 2) 00:19:07.501 15728.640 - 15847.796: 95.1379% ( 4) 00:19:07.501 15847.796 - 15966.953: 95.1746% ( 4) 00:19:07.501 15966.953 - 16086.109: 95.2114% ( 4) 00:19:07.501 16086.109 - 16205.265: 95.2390% ( 3) 00:19:07.501 16205.265 - 16324.422: 95.2757% ( 4) 00:19:07.501 16324.422 - 16443.578: 95.3676% ( 10) 00:19:07.501 16443.578 - 16562.735: 95.4320% ( 7) 00:19:07.501 16562.735 - 16681.891: 95.5790% ( 16) 00:19:07.501 16681.891 - 16801.047: 95.6158% ( 4) 00:19:07.501 16801.047 - 16920.204: 95.7261% ( 12) 00:19:07.501 16920.204 - 17039.360: 95.8456% ( 13) 00:19:07.501 17039.360 - 17158.516: 95.9743% ( 14) 00:19:07.501 17158.516 - 17277.673: 96.0938% ( 13) 00:19:07.501 17277.673 - 17396.829: 96.2224% ( 14) 00:19:07.501 17396.829 - 17515.985: 96.3971% ( 19) 00:19:07.501 17515.985 - 17635.142: 96.6176% ( 24) 00:19:07.501 17635.142 - 17754.298: 96.8015% ( 20) 00:19:07.501 17754.298 - 17873.455: 97.0312% ( 25) 00:19:07.501 17873.455 - 17992.611: 97.1691% ( 15) 00:19:07.501 17992.611 - 18111.767: 97.2978% ( 14) 00:19:07.501 18111.767 - 18230.924: 97.4081% ( 12) 00:19:07.501 18230.924 - 18350.080: 97.5276% ( 13) 00:19:07.501 18350.080 - 18469.236: 97.6654% ( 15) 00:19:07.501 18469.236 - 18588.393: 97.8033% ( 15) 00:19:07.501 18588.393 - 18707.549: 97.9136% ( 12) 00:19:07.501 18707.549 - 18826.705: 98.0147% ( 11) 00:19:07.501 18826.705 - 18945.862: 98.1066% ( 10) 
00:19:07.501 18945.862 - 19065.018: 98.2169% ( 12) 00:19:07.501 19065.018 - 19184.175: 98.3272% ( 12) 00:19:07.501 19184.175 - 19303.331: 98.4191% ( 10) 00:19:07.501 19303.331 - 19422.487: 98.5018% ( 9) 00:19:07.501 19422.487 - 19541.644: 98.5754% ( 8) 00:19:07.501 19541.644 - 19660.800: 98.6581% ( 9) 00:19:07.501 19660.800 - 19779.956: 98.7316% ( 8) 00:19:07.501 19779.956 - 19899.113: 98.7776% ( 5) 00:19:07.501 19899.113 - 20018.269: 98.8051% ( 3) 00:19:07.501 20018.269 - 20137.425: 98.8235% ( 2) 00:19:07.501 26810.182 - 26929.338: 98.8419% ( 2) 00:19:07.501 26929.338 - 27048.495: 98.8695% ( 3) 00:19:07.501 27048.495 - 27167.651: 98.8971% ( 3) 00:19:07.501 27167.651 - 27286.807: 98.9246% ( 3) 00:19:07.501 27286.807 - 27405.964: 98.9522% ( 3) 00:19:07.501 27405.964 - 27525.120: 98.9798% ( 3) 00:19:07.501 27525.120 - 27644.276: 99.0074% ( 3) 00:19:07.501 27644.276 - 27763.433: 99.0257% ( 2) 00:19:07.501 27763.433 - 27882.589: 99.0533% ( 3) 00:19:07.501 27882.589 - 28001.745: 99.0809% ( 3) 00:19:07.501 28001.745 - 28120.902: 99.1176% ( 4) 00:19:07.501 28120.902 - 28240.058: 99.1360% ( 2) 00:19:07.501 28240.058 - 28359.215: 99.1636% ( 3) 00:19:07.501 28359.215 - 28478.371: 99.2004% ( 4) 00:19:07.501 28478.371 - 28597.527: 99.2188% ( 2) 00:19:07.501 28597.527 - 28716.684: 99.2463% ( 3) 00:19:07.501 28716.684 - 28835.840: 99.2831% ( 4) 00:19:07.501 28835.840 - 28954.996: 99.3107% ( 3) 00:19:07.501 28954.996 - 29074.153: 99.3382% ( 3) 00:19:07.501 29074.153 - 29193.309: 99.3658% ( 3) 00:19:07.501 29193.309 - 29312.465: 99.3934% ( 3) 00:19:07.501 29312.465 - 29431.622: 99.4118% ( 2) 00:19:07.501 34793.658 - 35031.971: 99.4210% ( 1) 00:19:07.501 35031.971 - 35270.284: 99.4853% ( 7) 00:19:07.501 35270.284 - 35508.596: 99.5496% ( 7) 00:19:07.501 35508.596 - 35746.909: 99.6048% ( 6) 00:19:07.501 35746.909 - 35985.222: 99.6691% ( 7) 00:19:07.501 35985.222 - 36223.535: 99.7243% ( 6) 00:19:07.501 36223.535 - 36461.847: 99.7794% ( 6) 00:19:07.501 36461.847 - 36700.160: 99.8438% ( 7) 00:19:07.501 36700.160 - 36938.473: 99.8989% ( 6) 00:19:07.501 36938.473 - 37176.785: 99.9449% ( 5) 00:19:07.501 37176.785 - 37415.098: 100.0000% ( 6) 00:19:07.501 00:19:07.501 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:07.501 ============================================================================== 00:19:07.501 Range in us Cumulative IO count 00:19:07.501 9294.196 - 9353.775: 0.0092% ( 1) 00:19:07.501 9353.775 - 9413.353: 0.0184% ( 1) 00:19:07.501 9413.353 - 9472.931: 0.0919% ( 8) 00:19:07.501 9472.931 - 9532.509: 0.1838% ( 10) 00:19:07.501 9532.509 - 9592.087: 0.3033% ( 13) 00:19:07.501 9592.087 - 9651.665: 0.4963% ( 21) 00:19:07.501 9651.665 - 9711.244: 0.7812% ( 31) 00:19:07.501 9711.244 - 9770.822: 1.1765% ( 43) 00:19:07.501 9770.822 - 9830.400: 1.7004% ( 57) 00:19:07.501 9830.400 - 9889.978: 2.3989% ( 76) 00:19:07.501 9889.978 - 9949.556: 3.2169% ( 89) 00:19:07.501 9949.556 - 10009.135: 4.0533% ( 91) 00:19:07.501 10009.135 - 10068.713: 5.1746% ( 122) 00:19:07.501 10068.713 - 10128.291: 6.3419% ( 127) 00:19:07.501 10128.291 - 10187.869: 7.7114% ( 149) 00:19:07.501 10187.869 - 10247.447: 9.3566% ( 179) 00:19:07.501 10247.447 - 10307.025: 11.0018% ( 179) 00:19:07.501 10307.025 - 10366.604: 12.9228% ( 209) 00:19:07.501 10366.604 - 10426.182: 14.9265% ( 218) 00:19:07.501 10426.182 - 10485.760: 17.1048% ( 237) 00:19:07.501 10485.760 - 10545.338: 19.5588% ( 267) 00:19:07.501 10545.338 - 10604.916: 22.7757% ( 350) 00:19:07.501 10604.916 - 10664.495: 26.5441% ( 410) 00:19:07.501 10664.495 - 
10724.073: 30.2114% ( 399) 00:19:07.501 10724.073 - 10783.651: 33.8143% ( 392) 00:19:07.501 10783.651 - 10843.229: 37.1507% ( 363) 00:19:07.501 10843.229 - 10902.807: 40.2941% ( 342) 00:19:07.501 10902.807 - 10962.385: 43.4467% ( 343) 00:19:07.501 10962.385 - 11021.964: 46.2408% ( 304) 00:19:07.501 11021.964 - 11081.542: 49.2831% ( 331) 00:19:07.501 11081.542 - 11141.120: 52.3989% ( 339) 00:19:07.501 11141.120 - 11200.698: 55.4136% ( 328) 00:19:07.501 11200.698 - 11260.276: 58.2812% ( 312) 00:19:07.501 11260.276 - 11319.855: 60.9926% ( 295) 00:19:07.501 11319.855 - 11379.433: 63.5294% ( 276) 00:19:07.501 11379.433 - 11439.011: 65.9099% ( 259) 00:19:07.501 11439.011 - 11498.589: 68.1526% ( 244) 00:19:07.501 11498.589 - 11558.167: 70.2941% ( 233) 00:19:07.501 11558.167 - 11617.745: 72.2059% ( 208) 00:19:07.501 11617.745 - 11677.324: 73.7224% ( 165) 00:19:07.501 11677.324 - 11736.902: 75.1379% ( 154) 00:19:07.501 11736.902 - 11796.480: 76.4614% ( 144) 00:19:07.501 11796.480 - 11856.058: 77.7390% ( 139) 00:19:07.501 11856.058 - 11915.636: 78.8603% ( 122) 00:19:07.501 11915.636 - 11975.215: 79.8254% ( 105) 00:19:07.501 11975.215 - 12034.793: 80.8180% ( 108) 00:19:07.501 12034.793 - 12094.371: 81.6452% ( 90) 00:19:07.501 12094.371 - 12153.949: 82.4632% ( 89) 00:19:07.502 12153.949 - 12213.527: 83.2721% ( 88) 00:19:07.502 12213.527 - 12273.105: 83.9706% ( 76) 00:19:07.502 12273.105 - 12332.684: 84.6599% ( 75) 00:19:07.502 12332.684 - 12392.262: 85.2665% ( 66) 00:19:07.502 12392.262 - 12451.840: 85.7996% ( 58) 00:19:07.502 12451.840 - 12511.418: 86.2776% ( 52) 00:19:07.502 12511.418 - 12570.996: 86.7188% ( 48) 00:19:07.502 12570.996 - 12630.575: 87.1507% ( 47) 00:19:07.502 12630.575 - 12690.153: 87.5276% ( 41) 00:19:07.502 12690.153 - 12749.731: 87.8585% ( 36) 00:19:07.502 12749.731 - 12809.309: 88.2077% ( 38) 00:19:07.502 12809.309 - 12868.887: 88.7592% ( 60) 00:19:07.502 12868.887 - 12928.465: 89.1544% ( 43) 00:19:07.502 12928.465 - 12988.044: 89.5864% ( 47) 00:19:07.502 12988.044 - 13047.622: 90.0092% ( 46) 00:19:07.502 13047.622 - 13107.200: 90.4412% ( 47) 00:19:07.502 13107.200 - 13166.778: 90.8364% ( 43) 00:19:07.502 13166.778 - 13226.356: 91.1765% ( 37) 00:19:07.502 13226.356 - 13285.935: 91.4614% ( 31) 00:19:07.502 13285.935 - 13345.513: 91.7371% ( 30) 00:19:07.502 13345.513 - 13405.091: 92.0312% ( 32) 00:19:07.502 13405.091 - 13464.669: 92.3162% ( 31) 00:19:07.502 13464.669 - 13524.247: 92.5276% ( 23) 00:19:07.502 13524.247 - 13583.825: 92.7298% ( 22) 00:19:07.502 13583.825 - 13643.404: 92.9320% ( 22) 00:19:07.502 13643.404 - 13702.982: 93.1250% ( 21) 00:19:07.502 13702.982 - 13762.560: 93.2904% ( 18) 00:19:07.502 13762.560 - 13822.138: 93.3915% ( 11) 00:19:07.502 13822.138 - 13881.716: 93.5294% ( 15) 00:19:07.502 13881.716 - 13941.295: 93.7040% ( 19) 00:19:07.502 13941.295 - 14000.873: 93.9154% ( 23) 00:19:07.502 14000.873 - 14060.451: 94.0901% ( 19) 00:19:07.502 14060.451 - 14120.029: 94.1912% ( 11) 00:19:07.502 14120.029 - 14179.607: 94.2923% ( 11) 00:19:07.502 14179.607 - 14239.185: 94.3566% ( 7) 00:19:07.502 14239.185 - 14298.764: 94.3934% ( 4) 00:19:07.502 14298.764 - 14358.342: 94.4393% ( 5) 00:19:07.502 14358.342 - 14417.920: 94.4853% ( 5) 00:19:07.502 14417.920 - 14477.498: 94.5312% ( 5) 00:19:07.502 14477.498 - 14537.076: 94.5680% ( 4) 00:19:07.502 14537.076 - 14596.655: 94.5956% ( 3) 00:19:07.502 14596.655 - 14656.233: 94.6324% ( 4) 00:19:07.502 14656.233 - 14715.811: 94.6599% ( 3) 00:19:07.502 14715.811 - 14775.389: 94.6783% ( 2) 00:19:07.502 14775.389 - 14834.967: 94.6875% ( 
1) 00:19:07.502 14834.967 - 14894.545: 94.7059% ( 2) 00:19:07.502 15073.280 - 15132.858: 94.7151% ( 1) 00:19:07.502 15132.858 - 15192.436: 94.7426% ( 3) 00:19:07.502 15192.436 - 15252.015: 94.7518% ( 1) 00:19:07.502 15252.015 - 15371.171: 94.8070% ( 6) 00:19:07.502 15371.171 - 15490.327: 94.8529% ( 5) 00:19:07.502 15490.327 - 15609.484: 95.0000% ( 16) 00:19:07.502 15609.484 - 15728.640: 95.1011% ( 11) 00:19:07.502 15728.640 - 15847.796: 95.1562% ( 6) 00:19:07.502 15847.796 - 15966.953: 95.2022% ( 5) 00:19:07.502 15966.953 - 16086.109: 95.2574% ( 6) 00:19:07.502 16086.109 - 16205.265: 95.3217% ( 7) 00:19:07.502 16205.265 - 16324.422: 95.3952% ( 8) 00:19:07.502 16324.422 - 16443.578: 95.4504% ( 6) 00:19:07.502 16443.578 - 16562.735: 95.5147% ( 7) 00:19:07.502 16562.735 - 16681.891: 95.5699% ( 6) 00:19:07.502 16681.891 - 16801.047: 95.6710% ( 11) 00:19:07.502 16801.047 - 16920.204: 95.7904% ( 13) 00:19:07.502 16920.204 - 17039.360: 95.9099% ( 13) 00:19:07.502 17039.360 - 17158.516: 96.0294% ( 13) 00:19:07.502 17158.516 - 17277.673: 96.1213% ( 10) 00:19:07.502 17277.673 - 17396.829: 96.2684% ( 16) 00:19:07.502 17396.829 - 17515.985: 96.4338% ( 18) 00:19:07.502 17515.985 - 17635.142: 96.7096% ( 30) 00:19:07.502 17635.142 - 17754.298: 96.8382% ( 14) 00:19:07.502 17754.298 - 17873.455: 96.9761% ( 15) 00:19:07.502 17873.455 - 17992.611: 97.1048% ( 14) 00:19:07.502 17992.611 - 18111.767: 97.2794% ( 19) 00:19:07.502 18111.767 - 18230.924: 97.3805% ( 11) 00:19:07.502 18230.924 - 18350.080: 97.4632% ( 9) 00:19:07.502 18350.080 - 18469.236: 97.5735% ( 12) 00:19:07.502 18469.236 - 18588.393: 97.6654% ( 10) 00:19:07.502 18588.393 - 18707.549: 97.7482% ( 9) 00:19:07.502 18707.549 - 18826.705: 97.8493% ( 11) 00:19:07.502 18826.705 - 18945.862: 97.9504% ( 11) 00:19:07.502 18945.862 - 19065.018: 98.0699% ( 13) 00:19:07.502 19065.018 - 19184.175: 98.1893% ( 13) 00:19:07.502 19184.175 - 19303.331: 98.2629% ( 8) 00:19:07.502 19303.331 - 19422.487: 98.3364% ( 8) 00:19:07.502 19422.487 - 19541.644: 98.4191% ( 9) 00:19:07.502 19541.644 - 19660.800: 98.4743% ( 6) 00:19:07.502 19660.800 - 19779.956: 98.5478% ( 8) 00:19:07.502 19779.956 - 19899.113: 98.5938% ( 5) 00:19:07.502 19899.113 - 20018.269: 98.6305% ( 4) 00:19:07.502 20018.269 - 20137.425: 98.6581% ( 3) 00:19:07.502 20137.425 - 20256.582: 98.6765% ( 2) 00:19:07.502 20256.582 - 20375.738: 98.7040% ( 3) 00:19:07.502 20375.738 - 20494.895: 98.7316% ( 3) 00:19:07.502 20494.895 - 20614.051: 98.7684% ( 4) 00:19:07.502 20614.051 - 20733.207: 98.8051% ( 4) 00:19:07.502 20733.207 - 20852.364: 98.8235% ( 2) 00:19:07.502 24307.898 - 24427.055: 98.8511% ( 3) 00:19:07.502 24427.055 - 24546.211: 98.8695% ( 2) 00:19:07.502 24546.211 - 24665.367: 98.8971% ( 3) 00:19:07.502 24665.367 - 24784.524: 98.9246% ( 3) 00:19:07.502 24784.524 - 24903.680: 98.9522% ( 3) 00:19:07.502 24903.680 - 25022.836: 98.9890% ( 4) 00:19:07.502 25022.836 - 25141.993: 99.0074% ( 2) 00:19:07.502 25141.993 - 25261.149: 99.0349% ( 3) 00:19:07.502 25261.149 - 25380.305: 99.0625% ( 3) 00:19:07.502 25380.305 - 25499.462: 99.0901% ( 3) 00:19:07.502 25499.462 - 25618.618: 99.1268% ( 4) 00:19:07.502 25618.618 - 25737.775: 99.1544% ( 3) 00:19:07.502 25737.775 - 25856.931: 99.1728% ( 2) 00:19:07.502 25856.931 - 25976.087: 99.2096% ( 4) 00:19:07.502 25976.087 - 26095.244: 99.2371% ( 3) 00:19:07.502 26095.244 - 26214.400: 99.2647% ( 3) 00:19:07.502 26214.400 - 26333.556: 99.2923% ( 3) 00:19:07.502 26333.556 - 26452.713: 99.3199% ( 3) 00:19:07.502 26452.713 - 26571.869: 99.3474% ( 3) 00:19:07.502 26571.869 - 
00:19:07.502 [tail of the preceding namespace's latency histogram elided; cumulative IO count reaches 100.0000% ( 6) at the 34555.345 - 34793.658 us bucket]
00:19:07.502 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:19:07.502 ==============================================================================
00:19:07.502 Range in us Cumulative IO count
00:19:07.502 [histogram buckets 9234.618 - 32172.218 us elided; cumulative IO count reaches 100.0000% ( 3) at the 31933.905 - 32172.218 us bucket]
00:19:07.503 11:48:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:19:07.503 real 0m2.738s
00:19:07.503 user 0m2.325s
00:19:07.503 sys 0m0.279s
00:19:07.503 11:48:04 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:07.503 11:48:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:19:07.503 ************************************
00:19:07.503 END TEST nvme_perf
00:19:07.503 ************************************
00:19:07.503 11:48:04 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:19:07.503 11:48:04 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:19:07.503 11:48:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:07.503 11:48:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:07.503 ************************************
00:19:07.503 START TEST nvme_hello_world
00:19:07.503 ************************************
00:19:07.503 11:48:04 nvme.nvme_hello_world --
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:07.762 Initializing NVMe Controllers 00:19:07.762 Attached to 0000:00:10.0 00:19:07.762 Namespace ID: 1 size: 6GB 00:19:07.762 Attached to 0000:00:11.0 00:19:07.762 Namespace ID: 1 size: 5GB 00:19:07.762 Attached to 0000:00:13.0 00:19:07.762 Namespace ID: 1 size: 1GB 00:19:07.762 Attached to 0000:00:12.0 00:19:07.762 Namespace ID: 1 size: 4GB 00:19:07.762 Namespace ID: 2 size: 4GB 00:19:07.762 Namespace ID: 3 size: 4GB 00:19:07.762 Initialization complete. 00:19:07.762 INFO: using host memory buffer for IO 00:19:07.762 Hello world! 00:19:07.762 INFO: using host memory buffer for IO 00:19:07.762 Hello world! 00:19:07.762 INFO: using host memory buffer for IO 00:19:07.762 Hello world! 00:19:07.762 INFO: using host memory buffer for IO 00:19:07.762 Hello world! 00:19:07.762 INFO: using host memory buffer for IO 00:19:07.762 Hello world! 00:19:07.762 INFO: using host memory buffer for IO 00:19:07.762 Hello world! 00:19:07.762 00:19:07.762 real 0m0.310s 00:19:07.762 user 0m0.132s 00:19:07.762 sys 0m0.122s 00:19:07.762 11:48:04 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:07.762 11:48:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:07.762 ************************************ 00:19:07.762 END TEST nvme_hello_world 00:19:07.762 ************************************ 00:19:07.762 11:48:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:07.762 11:48:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:07.762 11:48:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.762 11:48:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.020 ************************************ 00:19:08.020 START TEST nvme_sgl 00:19:08.020 ************************************ 00:19:08.020 11:48:04 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:08.020 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:08.020 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:08.278 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:08.278 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:08.278 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:08.278 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:08.278 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:19:08.278 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:19:08.278 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:19:08.278 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:19:08.278 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:19:08.278 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:19:08.278 
0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:19:08.278 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:19:08.278 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:19:08.278 NVMe Readv/Writev Request test 00:19:08.278 Attached to 0000:00:10.0 00:19:08.278 Attached to 0000:00:11.0 00:19:08.278 Attached to 0000:00:13.0 00:19:08.278 Attached to 0000:00:12.0 00:19:08.278 0000:00:10.0: build_io_request_2 test passed 00:19:08.278 0000:00:10.0: build_io_request_4 test passed 00:19:08.278 0000:00:10.0: build_io_request_5 test passed 00:19:08.278 0000:00:10.0: build_io_request_6 test passed 00:19:08.278 0000:00:10.0: build_io_request_7 test passed 00:19:08.278 0000:00:10.0: build_io_request_10 test passed 00:19:08.278 0000:00:11.0: build_io_request_2 test passed 00:19:08.278 0000:00:11.0: build_io_request_4 test passed 00:19:08.278 0000:00:11.0: build_io_request_5 test passed 00:19:08.278 0000:00:11.0: build_io_request_6 test passed 00:19:08.278 0000:00:11.0: build_io_request_7 test passed 00:19:08.278 0000:00:11.0: build_io_request_10 test passed 00:19:08.278 Cleaning up... 00:19:08.278 00:19:08.278 real 0m0.376s 00:19:08.278 user 0m0.195s 00:19:08.278 sys 0m0.139s 00:19:08.278 11:48:05 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.278 11:48:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:08.278 ************************************ 00:19:08.278 END TEST nvme_sgl 00:19:08.278 ************************************ 00:19:08.278 11:48:05 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:08.278 11:48:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:08.278 11:48:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.278 11:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.278 ************************************ 00:19:08.278 START TEST nvme_e2edp 00:19:08.278 ************************************ 00:19:08.278 11:48:05 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:08.537 NVMe Write/Read with End-to-End data protection test 00:19:08.537 Attached to 0000:00:10.0 00:19:08.537 Attached to 0000:00:11.0 00:19:08.537 Attached to 0000:00:13.0 00:19:08.537 Attached to 0000:00:12.0 00:19:08.537 Cleaning up... 
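Every test in this log runs through the harness's run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timing block that follows each run, such as the one just below. A minimal sketch of that pattern, under the assumption of a simplified interface rather than the actual common/autotest_common.sh implementation:

  # Hypothetical sketch of the banner-and-timing pattern; the real wrapper in
  # common/autotest_common.sh also handles xtrace toggling and argument checks.
  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"; local rc=$?   # emits the real/user/sys lines seen in this log
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }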
00:19:08.537 00:19:08.537 real 0m0.297s 00:19:08.537 user 0m0.111s 00:19:08.537 sys 0m0.140s 00:19:08.537 11:48:05 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.537 11:48:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:08.537 ************************************ 00:19:08.537 END TEST nvme_e2edp 00:19:08.537 ************************************ 00:19:08.537 11:48:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:08.537 11:48:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:08.537 11:48:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.537 11:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.537 ************************************ 00:19:08.537 START TEST nvme_reserve 00:19:08.537 ************************************ 00:19:08.537 11:48:05 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:09.132 ===================================================== 00:19:09.132 NVMe Controller at PCI bus 0, device 16, function 0 00:19:09.132 ===================================================== 00:19:09.132 Reservations: Not Supported 00:19:09.132 ===================================================== 00:19:09.132 NVMe Controller at PCI bus 0, device 17, function 0 00:19:09.132 ===================================================== 00:19:09.132 Reservations: Not Supported 00:19:09.132 ===================================================== 00:19:09.132 NVMe Controller at PCI bus 0, device 19, function 0 00:19:09.132 ===================================================== 00:19:09.132 Reservations: Not Supported 00:19:09.132 ===================================================== 00:19:09.132 NVMe Controller at PCI bus 0, device 18, function 0 00:19:09.132 ===================================================== 00:19:09.132 Reservations: Not Supported 00:19:09.132 Reservation test passed 00:19:09.132 00:19:09.132 real 0m0.285s 00:19:09.132 user 0m0.109s 00:19:09.132 sys 0m0.133s 00:19:09.132 11:48:05 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.132 11:48:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:09.132 ************************************ 00:19:09.132 END TEST nvme_reserve 00:19:09.132 ************************************ 00:19:09.132 11:48:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:09.132 11:48:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:09.132 11:48:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.133 11:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.133 ************************************ 00:19:09.133 START TEST nvme_err_injection 00:19:09.133 ************************************ 00:19:09.133 11:48:05 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:09.391 NVMe Error Injection test 00:19:09.391 Attached to 0000:00:10.0 00:19:09.391 Attached to 0000:00:11.0 00:19:09.391 Attached to 0000:00:13.0 00:19:09.391 Attached to 0000:00:12.0 00:19:09.391 0000:00:10.0: get features failed as expected 00:19:09.391 0000:00:11.0: get features failed as expected 00:19:09.391 0000:00:13.0: get features failed as expected 00:19:09.391 0000:00:12.0: get features failed as expected 00:19:09.391 
0000:00:10.0: get features successfully as expected 00:19:09.391 0000:00:11.0: get features successfully as expected 00:19:09.391 0000:00:13.0: get features successfully as expected 00:19:09.391 0000:00:12.0: get features successfully as expected 00:19:09.391 0000:00:10.0: read failed as expected 00:19:09.391 0000:00:11.0: read failed as expected 00:19:09.391 0000:00:13.0: read failed as expected 00:19:09.391 0000:00:12.0: read failed as expected 00:19:09.391 0000:00:10.0: read successfully as expected 00:19:09.391 0000:00:11.0: read successfully as expected 00:19:09.391 0000:00:13.0: read successfully as expected 00:19:09.391 0000:00:12.0: read successfully as expected 00:19:09.391 Cleaning up... 00:19:09.391 00:19:09.391 real 0m0.303s 00:19:09.391 user 0m0.115s 00:19:09.391 sys 0m0.139s 00:19:09.391 11:48:06 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.391 11:48:06 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:09.391 ************************************ 00:19:09.391 END TEST nvme_err_injection 00:19:09.391 ************************************ 00:19:09.391 11:48:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:09.391 11:48:06 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:19:09.391 11:48:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.391 11:48:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.391 ************************************ 00:19:09.391 START TEST nvme_overhead 00:19:09.391 ************************************ 00:19:09.391 11:48:06 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:10.766 Initializing NVMe Controllers 00:19:10.766 Attached to 0000:00:10.0 00:19:10.766 Attached to 0000:00:11.0 00:19:10.766 Attached to 0000:00:13.0 00:19:10.766 Attached to 0000:00:12.0 00:19:10.766 Initialization complete. Launching workers. 
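The overhead run launched above reports per-I/O software overhead in nanoseconds, while its histograms bucket in microseconds. A quick conversion of the average submit overhead printed below (16155.3 ns), assuming awk(1) is available:

  awk 'BEGIN { printf "%.3f us\n", 16155.3 / 1000 }'   # -> 16.155 us, landing in the 16.058 - 16.175 us submit bucket below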
00:19:10.766 submit (in ns) avg, min, max = 16155.3, 13415.0, 82393.2
00:19:10.766 complete (in ns) avg, min, max = 10947.0, 9104.5, 84625.0
00:19:10.766 Submit histogram
00:19:10.766 ================
00:19:10.766 Range in us Cumulative Count
00:19:10.766 [submit histogram buckets 13.382 - 82.851 us elided; cumulative count reaches 100.0000% ( 1) at the 82.385 - 82.851 us bucket]
00:19:10.767 Complete histogram
00:19:10.767 ==================
00:19:10.767 Range in us Cumulative Count
00:19:10.768 [complete histogram buckets 9.076 - 84.713 us elided; cumulative count reaches 100.0000% ( 1) at the 84.247 - 84.713 us bucket]
00:19:10.769 real 0m1.286s
00:19:10.769 user 0m1.112s
00:19:10.769 sys 0m0.123s
00:19:10.769 11:48:07 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:10.769 11:48:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:19:10.769 ************************************
00:19:10.769 END TEST nvme_overhead
00:19:10.769 ************************************
00:19:10.769 11:48:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:19:10.769 11:48:07 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:19:10.769 11:48:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:10.769 11:48:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:10.769 ************************************
00:19:10.769 START TEST nvme_arbitration
00:19:10.769 ************************************
00:19:10.769 11:48:07 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:19:14.062 Initializing NVMe Controllers
00:19:14.062 Attached to 0000:00:10.0
00:19:14.062 Attached to 0000:00:11.0
00:19:14.062 Attached to 0000:00:13.0
00:19:14.062 Attached to 0000:00:12.0
00:19:14.062 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:19:14.062 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:19:14.062 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:19:14.062 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:19:14.062 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:19:14.062 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:19:14.062 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:19:14.062 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:19:14.062 Initialization complete. Launching workers.
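The configuration line echoed above shows the defaults the arbitration example filled in around the requested -t 3 -i 0. Reading the visible flags loosely (queue depth -q 64, buffer size -s 131072, -w randrw with a 50% read mix via -M, core mask -c 0xf, -n 100000 I/Os per worker; readings inferred from this echo rather than from the tool's documentation), an equivalent manual invocation would be:

  # Sketch only; flag readings inferred from the configuration echo above.
  /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0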
00:19:14.062 Starting thread on core 1 with urgent priority queue 00:19:14.062 Starting thread on core 2 with urgent priority queue 00:19:14.062 Starting thread on core 3 with urgent priority queue 00:19:14.062 Starting thread on core 0 with urgent priority queue 00:19:14.062 QEMU NVMe Ctrl (12340 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:19:14.062 QEMU NVMe Ctrl (12342 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:19:14.062 QEMU NVMe Ctrl (12341 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:19:14.062 QEMU NVMe Ctrl (12342 ) core 1: 640.00 IO/s 156.25 secs/100000 ios 00:19:14.062 QEMU NVMe Ctrl (12343 ) core 2: 640.00 IO/s 156.25 secs/100000 ios 00:19:14.062 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:19:14.062 ======================================================== 00:19:14.062 00:19:14.062 00:19:14.062 real 0m3.425s 00:19:14.062 user 0m9.370s 00:19:14.062 sys 0m0.149s 00:19:14.062 11:48:11 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:14.062 ************************************ 00:19:14.062 11:48:11 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:14.062 END TEST nvme_arbitration 00:19:14.062 ************************************ 00:19:14.062 11:48:11 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:14.062 11:48:11 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:14.062 11:48:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:14.062 11:48:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.062 ************************************ 00:19:14.062 START TEST nvme_single_aen 00:19:14.062 ************************************ 00:19:14.062 11:48:11 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:14.321 Asynchronous Event Request test 00:19:14.321 Attached to 0000:00:10.0 00:19:14.321 Attached to 0000:00:11.0 00:19:14.321 Attached to 0000:00:13.0 00:19:14.321 Attached to 0000:00:12.0 00:19:14.321 Reset controller to setup AER completions for this process 00:19:14.321 Registering asynchronous event callbacks... 
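The per-core results above are internally consistent: the secs/100000 ios column is simply 100000 divided by the IO/s column. For example, for the 618.67 IO/s workers (assuming awk is available):

  awk 'BEGIN { printf "%.2f secs per 100000 ios\n", 100000 / 618.67 }'   # -> 161.64, as reported above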
00:19:14.321 Getting orig temperature thresholds of all controllers 00:19:14.321 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.321 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.321 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.321 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.321 Setting all controllers temperature threshold low to trigger AER 00:19:14.321 Waiting for all controllers temperature threshold to be set lower 00:19:14.321 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.321 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:14.321 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.321 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:19:14.321 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.321 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:19:14.321 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.321 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:19:14.321 Waiting for all controllers to trigger AER and reset threshold 00:19:14.321 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.321 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.321 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.321 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.321 Cleaning up... 00:19:14.321 00:19:14.321 real 0m0.288s 00:19:14.321 user 0m0.109s 00:19:14.321 sys 0m0.131s 00:19:14.321 11:48:11 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:14.321 11:48:11 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:14.321 ************************************ 00:19:14.321 END TEST nvme_single_aen 00:19:14.321 ************************************ 00:19:14.580 11:48:11 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:14.580 11:48:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:14.580 11:48:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:14.580 11:48:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.580 ************************************ 00:19:14.580 START TEST nvme_doorbell_aers 00:19:14.580 ************************************ 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
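As traced above and continued just below, the doorbell test discovers its targets by rendering the generated SPDK config with gen_nvme.sh and extracting each controller's PCI address with jq, then runs each device under a 10-second timeout. A standalone sketch of the same pattern, assuming the repo layout used in this job:

  # Same discovery-and-loop pattern as the trace around this point; rootdir assumed.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1   # the trace below shows 4 devices found
  for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
      "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done

Note that the Failure: lines in the output that follows appear to be the intended result of deliberately invalid doorbell writes rather than harness errors; the test still reaches END TEST normally.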
00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:14.580 11:48:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:14.838 [2024-07-25 11:48:11.779927] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:24.843 Executing: test_write_invalid_db 00:19:24.843 Waiting for AER completion... 00:19:24.843 Failure: test_write_invalid_db 00:19:24.843 00:19:24.843 Executing: test_invalid_db_write_overflow_sq 00:19:24.843 Waiting for AER completion... 00:19:24.843 Failure: test_invalid_db_write_overflow_sq 00:19:24.843 00:19:24.843 Executing: test_invalid_db_write_overflow_cq 00:19:24.843 Waiting for AER completion... 00:19:24.843 Failure: test_invalid_db_write_overflow_cq 00:19:24.843 00:19:24.843 11:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:24.843 11:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:19:24.843 [2024-07-25 11:48:21.806650] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:34.825 Executing: test_write_invalid_db 00:19:34.825 Waiting for AER completion... 00:19:34.825 Failure: test_write_invalid_db 00:19:34.825 00:19:34.825 Executing: test_invalid_db_write_overflow_sq 00:19:34.825 Waiting for AER completion... 00:19:34.825 Failure: test_invalid_db_write_overflow_sq 00:19:34.825 00:19:34.825 Executing: test_invalid_db_write_overflow_cq 00:19:34.825 Waiting for AER completion... 00:19:34.825 Failure: test_invalid_db_write_overflow_cq 00:19:34.825 00:19:34.825 11:48:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:34.825 11:48:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:19:34.825 [2024-07-25 11:48:31.812960] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:44.812 Executing: test_write_invalid_db 00:19:44.812 Waiting for AER completion... 00:19:44.812 Failure: test_write_invalid_db 00:19:44.812 00:19:44.812 Executing: test_invalid_db_write_overflow_sq 00:19:44.812 Waiting for AER completion... 00:19:44.812 Failure: test_invalid_db_write_overflow_sq 00:19:44.812 00:19:44.812 Executing: test_invalid_db_write_overflow_cq 00:19:44.812 Waiting for AER completion... 
00:19:44.812 Failure: test_invalid_db_write_overflow_cq 00:19:44.812 00:19:44.812 11:48:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:44.812 11:48:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:19:45.070 [2024-07-25 11:48:41.908235] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 Executing: test_write_invalid_db 00:19:55.041 Waiting for AER completion... 00:19:55.041 Failure: test_write_invalid_db 00:19:55.041 00:19:55.041 Executing: test_invalid_db_write_overflow_sq 00:19:55.041 Waiting for AER completion... 00:19:55.041 Failure: test_invalid_db_write_overflow_sq 00:19:55.041 00:19:55.041 Executing: test_invalid_db_write_overflow_cq 00:19:55.041 Waiting for AER completion... 00:19:55.041 Failure: test_invalid_db_write_overflow_cq 00:19:55.041 00:19:55.041 00:19:55.041 real 0m40.251s 00:19:55.041 user 0m33.936s 00:19:55.041 sys 0m5.888s 00:19:55.041 11:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.041 11:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:55.041 ************************************ 00:19:55.041 END TEST nvme_doorbell_aers 00:19:55.041 ************************************ 00:19:55.041 11:48:51 nvme -- nvme/nvme.sh@97 -- # uname 00:19:55.041 11:48:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:19:55.041 11:48:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:19:55.041 11:48:51 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:19:55.041 11:48:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.041 11:48:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.041 ************************************ 00:19:55.041 START TEST nvme_multi_aen 00:19:55.041 ************************************ 00:19:55.041 11:48:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:19:55.041 [2024-07-25 11:48:51.921847] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.921963] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.921988] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.923793] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.923851] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.923872] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.925347] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. 
Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.925395] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.925427] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.926891] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.926941] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 [2024-07-25 11:48:51.926961] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68681) is not found. Dropping the request. 00:19:55.041 Child process pid: 69203 00:19:55.300 [Child] Asynchronous Event Request test 00:19:55.300 [Child] Attached to 0000:00:10.0 00:19:55.300 [Child] Attached to 0000:00:11.0 00:19:55.300 [Child] Attached to 0000:00:13.0 00:19:55.300 [Child] Attached to 0000:00:12.0 00:19:55.300 [Child] Registering asynchronous event callbacks... 00:19:55.300 [Child] Getting orig temperature thresholds of all controllers 00:19:55.300 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 [Child] Waiting for all controllers to trigger AER and reset threshold 00:19:55.300 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 [Child] Cleaning up... 00:19:55.300 Asynchronous Event Request test 00:19:55.300 Attached to 0000:00:10.0 00:19:55.300 Attached to 0000:00:11.0 00:19:55.300 Attached to 0000:00:13.0 00:19:55.300 Attached to 0000:00:12.0 00:19:55.300 Reset controller to setup AER completions for this process 00:19:55.300 Registering asynchronous event callbacks... 
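The threshold reports in this test pair Kelvin and Celsius values; the relation is K = °C + 273.15, with this log rounding to whole Kelvin. A quick check (assuming awk):

  awk 'BEGIN { printf "%d K\n", 70 + 273.15 }'   # 70 C -> 343 K, as in the threshold lines below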
00:19:55.300 Getting orig temperature thresholds of all controllers 00:19:55.300 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.300 Setting all controllers temperature threshold low to trigger AER 00:19:55.300 Waiting for all controllers temperature threshold to be set lower 00:19:55.300 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:55.300 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:19:55.300 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:19:55.300 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.300 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:19:55.300 Waiting for all controllers to trigger AER and reset threshold 00:19:55.300 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.300 Cleaning up... 00:19:55.300 00:19:55.300 real 0m0.558s 00:19:55.300 user 0m0.203s 00:19:55.300 sys 0m0.244s 00:19:55.300 11:48:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.300 ************************************ 00:19:55.300 END TEST nvme_multi_aen 00:19:55.300 ************************************ 00:19:55.300 11:48:52 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 11:48:52 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:19:55.300 11:48:52 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:55.300 11:48:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.300 11:48:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 ************************************ 00:19:55.300 START TEST nvme_startup 00:19:55.300 ************************************ 00:19:55.300 11:48:52 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:19:55.559 Initializing NVMe Controllers 00:19:55.559 Attached to 0000:00:10.0 00:19:55.559 Attached to 0000:00:11.0 00:19:55.559 Attached to 0000:00:13.0 00:19:55.559 Attached to 0000:00:12.0 00:19:55.559 Initialization complete. 00:19:55.559 Time used:179158.406 (us). 
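The startup test reports its own initialization time in microseconds; converting it (assuming awk) shows roughly 0.179 s inside the 0m0.268s wall-clock total reported just below:

  awk 'BEGIN { printf "%.3f s\n", 179158.406 / 1e6 }'   # -> 0.179 s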
00:19:55.559 00:19:55.559 real 0m0.268s 00:19:55.559 user 0m0.096s 00:19:55.559 sys 0m0.127s 00:19:55.559 11:48:52 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.559 ************************************ 00:19:55.559 END TEST nvme_startup 00:19:55.559 ************************************ 00:19:55.559 11:48:52 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:19:55.817 11:48:52 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:19:55.817 11:48:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:55.817 11:48:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.817 11:48:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.817 ************************************ 00:19:55.817 START TEST nvme_multi_secondary 00:19:55.817 ************************************ 00:19:55.817 11:48:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:19:55.817 11:48:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69254 00:19:55.817 11:48:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:19:55.817 11:48:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69255 00:19:55.817 11:48:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:19:55.817 11:48:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:19:59.101 Initializing NVMe Controllers 00:19:59.101 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:59.101 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:59.101 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:59.101 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:59.101 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:19:59.101 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:19:59.101 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:19:59.101 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:19:59.101 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:19:59.101 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:19:59.101 Initialization complete. Launching workers. 
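The three spdk_nvme_perf instances launched above all pass -i 0, which appears to select the same shared-memory instance id, while pinning to disjoint core masks (0x1, 0x2, 0x4), so no two runs contend for an lcore. The masks are one-hot bitmaps (bit n selects lcore n), which a short loop can decode:

  # Core masks are bitmaps: 0x1 -> lcore 0, 0x2 -> lcore 1, 0x4 -> lcore 2.
  for mask in 0x1 0x2 0x4; do
    core=0; m=$(( mask ))
    while (( m >>= 1 )); do (( core++ )); done
    echo "$mask -> lcore $core"
  done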
00:19:59.101 ======================================================== 00:19:59.101 Latency(us) 00:19:59.101 Device Information : IOPS MiB/s Average min max 00:19:59.101 PCIE (0000:00:10.0) NSID 1 from core 1: 5009.40 19.57 3192.22 1332.90 7530.71 00:19:59.101 PCIE (0000:00:11.0) NSID 1 from core 1: 5009.40 19.57 3194.09 1374.56 7348.22 00:19:59.101 PCIE (0000:00:13.0) NSID 1 from core 1: 5009.40 19.57 3194.49 1464.86 7453.87 00:19:59.101 PCIE (0000:00:12.0) NSID 1 from core 1: 5009.40 19.57 3194.72 1409.72 6835.64 00:19:59.101 PCIE (0000:00:12.0) NSID 2 from core 1: 5009.40 19.57 3194.69 1367.91 6380.58 00:19:59.101 PCIE (0000:00:12.0) NSID 3 from core 1: 5009.40 19.57 3194.70 1428.51 7436.57 00:19:59.101 ======================================================== 00:19:59.101 Total : 30056.42 117.41 3194.15 1332.90 7530.71 00:19:59.101 00:19:59.359 Initializing NVMe Controllers 00:19:59.359 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:59.359 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:59.359 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:59.359 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:59.359 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:19:59.359 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:19:59.359 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:19:59.359 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:19:59.359 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:19:59.359 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:19:59.359 Initialization complete. Launching workers. 00:19:59.359 ======================================================== 00:19:59.359 Latency(us) 00:19:59.359 Device Information : IOPS MiB/s Average min max 00:19:59.359 PCIE (0000:00:10.0) NSID 1 from core 2: 2174.65 8.49 7355.02 1557.72 15737.02 00:19:59.359 PCIE (0000:00:11.0) NSID 1 from core 2: 2174.65 8.49 7357.16 1516.51 19904.96 00:19:59.359 PCIE (0000:00:13.0) NSID 1 from core 2: 2174.65 8.49 7357.03 1678.61 19885.94 00:19:59.359 PCIE (0000:00:12.0) NSID 1 from core 2: 2174.65 8.49 7357.01 1436.56 19971.48 00:19:59.359 PCIE (0000:00:12.0) NSID 2 from core 2: 2174.65 8.49 7357.21 1462.27 20142.12 00:19:59.359 PCIE (0000:00:12.0) NSID 3 from core 2: 2174.65 8.49 7356.53 1429.54 19694.08 00:19:59.359 ======================================================== 00:19:59.359 Total : 13047.90 50.97 7356.66 1429.54 20142.12 00:19:59.359 00:19:59.359 11:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69254 00:20:01.258 Initializing NVMe Controllers 00:20:01.258 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:01.258 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:01.258 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:01.258 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:01.258 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:01.258 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:01.258 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:01.258 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:01.258 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:01.258 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:01.258 Initialization complete. Launching workers. 
00:20:01.258 ======================================================== 00:20:01.258 Latency(us) 00:20:01.258 Device Information : IOPS MiB/s Average min max 00:20:01.258 PCIE (0000:00:10.0) NSID 1 from core 0: 7752.67 30.28 2062.18 945.90 8272.44 00:20:01.258 PCIE (0000:00:11.0) NSID 1 from core 0: 7752.67 30.28 2063.30 992.83 8111.28 00:20:01.258 PCIE (0000:00:13.0) NSID 1 from core 0: 7752.67 30.28 2063.26 982.00 7936.34 00:20:01.258 PCIE (0000:00:12.0) NSID 1 from core 0: 7752.67 30.28 2063.21 947.52 7903.44 00:20:01.258 PCIE (0000:00:12.0) NSID 2 from core 0: 7752.67 30.28 2063.17 884.83 8133.53 00:20:01.258 PCIE (0000:00:12.0) NSID 3 from core 0: 7752.67 30.28 2063.13 847.97 8124.05 00:20:01.258 ======================================================== 00:20:01.258 Total : 46516.05 181.70 2063.04 847.97 8272.44 00:20:01.258 00:20:01.258 11:48:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69255 00:20:01.258 11:48:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69324 00:20:01.258 11:48:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:20:01.258 11:48:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69325 00:20:01.258 11:48:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:20:01.258 11:48:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:20:04.588 Initializing NVMe Controllers 00:20:04.588 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:04.588 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:04.588 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:04.588 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:04.588 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:04.588 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:04.588 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:04.588 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:04.588 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:04.588 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:04.588 Initialization complete. Launching workers. 
00:20:04.588 ======================================================== 00:20:04.588 Latency(us) 00:20:04.588 Device Information : IOPS MiB/s Average min max 00:20:04.588 PCIE (0000:00:10.0) NSID 1 from core 0: 5426.54 21.20 2946.52 953.67 6557.73 00:20:04.588 PCIE (0000:00:11.0) NSID 1 from core 0: 5426.54 21.20 2948.00 977.09 6421.20 00:20:04.588 PCIE (0000:00:13.0) NSID 1 from core 0: 5426.54 21.20 2948.04 983.14 7199.07 00:20:04.588 PCIE (0000:00:12.0) NSID 1 from core 0: 5426.54 21.20 2947.97 1010.57 7917.96 00:20:04.588 PCIE (0000:00:12.0) NSID 2 from core 0: 5426.54 21.20 2948.14 998.89 7425.10 00:20:04.588 PCIE (0000:00:12.0) NSID 3 from core 0: 5426.54 21.20 2948.13 975.89 7549.00 00:20:04.588 ======================================================== 00:20:04.588 Total : 32559.24 127.18 2947.80 953.67 7917.96 00:20:04.588 00:20:04.588 Initializing NVMe Controllers 00:20:04.588 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:04.588 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:04.588 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:04.588 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:04.588 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:20:04.588 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:20:04.588 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:20:04.588 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:20:04.588 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:20:04.588 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:20:04.588 Initialization complete. Launching workers. 00:20:04.588 ======================================================== 00:20:04.588 Latency(us) 00:20:04.588 Device Information : IOPS MiB/s Average min max 00:20:04.588 PCIE (0000:00:10.0) NSID 1 from core 1: 5203.34 20.33 3072.84 1113.06 6982.42 00:20:04.588 PCIE (0000:00:11.0) NSID 1 from core 1: 5203.34 20.33 3074.06 1102.68 6724.47 00:20:04.588 PCIE (0000:00:13.0) NSID 1 from core 1: 5203.34 20.33 3073.90 1133.10 6556.77 00:20:04.588 PCIE (0000:00:12.0) NSID 1 from core 1: 5203.34 20.33 3073.74 1126.59 6222.14 00:20:04.588 PCIE (0000:00:12.0) NSID 2 from core 1: 5203.34 20.33 3073.41 1130.13 6325.93 00:20:04.588 PCIE (0000:00:12.0) NSID 3 from core 1: 5203.34 20.33 3073.04 1146.69 6312.98 00:20:04.588 ======================================================== 00:20:04.588 Total : 31220.02 121.95 3073.50 1102.68 6982.42 00:20:04.588 00:20:06.487 Initializing NVMe Controllers 00:20:06.487 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:06.487 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:06.487 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:06.487 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:06.487 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:20:06.487 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:20:06.487 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:20:06.487 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:20:06.487 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:20:06.487 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:20:06.487 Initialization complete. Launching workers. 
00:20:06.487 ======================================================== 00:20:06.487 Latency(us) 00:20:06.487 Device Information : IOPS MiB/s Average min max 00:20:06.487 PCIE (0000:00:10.0) NSID 1 from core 2: 3666.92 14.32 4360.34 968.85 18657.97 00:20:06.487 PCIE (0000:00:11.0) NSID 1 from core 2: 3666.92 14.32 4362.87 918.11 20486.04 00:20:06.487 PCIE (0000:00:13.0) NSID 1 from core 2: 3666.92 14.32 4362.77 985.15 23382.55 00:20:06.487 PCIE (0000:00:12.0) NSID 1 from core 2: 3666.92 14.32 4359.40 979.00 22985.60 00:20:06.487 PCIE (0000:00:12.0) NSID 2 from core 2: 3670.11 14.34 4355.09 986.47 19068.88 00:20:06.487 PCIE (0000:00:12.0) NSID 3 from core 2: 3670.11 14.34 4354.57 982.00 19115.92 00:20:06.487 ======================================================== 00:20:06.487 Total : 22007.89 85.97 4359.17 918.11 23382.55 00:20:06.487 00:20:06.487 11:49:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69324 00:20:06.487 11:49:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69325 00:20:06.487 00:20:06.487 real 0m10.683s 00:20:06.487 user 0m18.608s 00:20:06.487 sys 0m0.860s 00:20:06.487 11:49:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.487 11:49:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:20:06.487 ************************************ 00:20:06.487 END TEST nvme_multi_secondary 00:20:06.487 ************************************ 00:20:06.487 11:49:03 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:20:06.487 11:49:03 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:20:06.487 11:49:03 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68266 ]] 00:20:06.487 11:49:03 nvme -- common/autotest_common.sh@1090 -- # kill 68266 00:20:06.487 11:49:03 nvme -- common/autotest_common.sh@1091 -- # wait 68266 00:20:06.487 [2024-07-25 11:49:03.351405] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.352071] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.352130] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.352158] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.354413] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.354476] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.354516] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.354560] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.356772] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 
00:20:06.487 [2024-07-25 11:49:03.356835] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.356859] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.356881] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.359059] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.359119] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.487 [2024-07-25 11:49:03.359143] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.488 [2024-07-25 11:49:03.359165] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69202) is not found. Dropping the request. 00:20:06.745 11:49:03 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:20:06.745 11:49:03 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:20:06.745 11:49:03 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:06.745 11:49:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:06.745 11:49:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.745 11:49:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:06.745 ************************************ 00:20:06.745 START TEST bdev_nvme_reset_stuck_adm_cmd 00:20:06.746 ************************************ 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:06.746 * Looking for test storage... 
00:20:06.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:06.746 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69482 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69482 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69482 ']' 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
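get_first_nvme_bdf resolves the target PCI address by asking scripts/gen_nvme.sh for a generated JSON config and extracting every .config[].params.traddr with jq, as the traced lines above show; the spdk_tgt instance (pid 69482, core mask 0xF) is then pointed at the first address over RPC. A condensed sketch of that discovery step (the sanity check is an illustrative addition, not part of autotest_common.sh):

  # Enumerate the NVMe PCI addresses known to the repo's config generator.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  bdf=${bdfs[0]}   # -> 0000:00:10.0 in this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"

The rpc.py call mirrors the bdev_nvme_attach_controller step the test performs right after the target reports it is listening on /var/tmp/spdk.sock.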
00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.004 11:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:07.004 [2024-07-25 11:49:03.904988] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:07.004 [2024-07-25 11:49:03.905136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69482 ] 00:20:07.263 [2024-07-25 11:49:04.085763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.263 [2024-07-25 11:49:04.296548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.263 [2024-07-25 11:49:04.296716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.520 [2024-07-25 11:49:04.297022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.520 [2024-07-25 11:49:04.297172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.086 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.086 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:20:08.086 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:20:08.086 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.086 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:08.353 nvme0n1 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_kv0Nz.txt 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:08.353 true 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721908145 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69506 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:20:08.353 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:08.354 11:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:10.254 [2024-07-25 11:49:07.190747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:10.254 [2024-07-25 11:49:07.191209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:10.254 [2024-07-25 11:49:07.191258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:10.254 [2024-07-25 11:49:07.191294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.254 [2024-07-25 11:49:07.193656] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.254 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69506 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69506 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69506 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_kv0Nz.txt 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:10.254 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_kv0Nz.txt 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69482 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69482 ']' 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69482 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69482 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.514 killing process with pid 69482 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69482' 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69482 00:20:10.514 11:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69482 00:20:12.417 11:49:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:20:12.417 11:49:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:20:12.417 00:20:12.417 real 0m5.773s 00:20:12.417 user 0m20.092s 00:20:12.417 sys 0m0.536s 00:20:12.417 11:49:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.417 11:49:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:12.417 ************************************ 00:20:12.417 END TEST bdev_nvme_reset_stuck_adm_cmd 00:20:12.417 ************************************ 00:20:12.676 11:49:09 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:20:12.676 11:49:09 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:20:12.676 11:49:09 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:12.676 11:49:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.676 11:49:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.676 ************************************ 00:20:12.677 START TEST nvme_fio 00:20:12.677 ************************************ 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:20:12.677 11:49:09 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:12.677 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:12.935 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:12.935 11:49:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:13.194 11:49:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:13.194 11:49:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:13.194 11:49:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:13.453 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:13.453 fio-3.35 00:20:13.453 Starting 1 thread 00:20:16.739 00:20:16.739 test: (groupid=0, jobs=1): err= 0: pid=69657: Thu Jul 25 11:49:13 2024 00:20:16.739 read: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(117MiB/2001msec) 00:20:16.739 slat (nsec): min=4719, max=65046, avg=6700.96, stdev=2188.65 00:20:16.739 clat (usec): min=290, max=9703, avg=4265.07, stdev=699.59 00:20:16.739 lat (usec): min=297, max=9768, avg=4271.77, stdev=700.51 00:20:16.739 clat percentiles (usec): 00:20:16.739 | 1.00th=[ 3032], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3687], 00:20:16.739 | 30.00th=[ 3785], 40.00th=[ 3982], 50.00th=[ 4293], 60.00th=[ 4424], 00:20:16.739 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5473], 00:20:16.739 | 99.00th=[ 6456], 99.50th=[ 7308], 99.90th=[ 8848], 99.95th=[ 8979], 00:20:16.739 | 99.99th=[ 9634] 00:20:16.739 bw ( KiB/s): min=52760, max=69080, per=100.00%, avg=60890.67, stdev=8160.16, samples=3 00:20:16.739 iops : min=13190, max=17270, avg=15222.67, stdev=2040.04, samples=3 00:20:16.739 write: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(117MiB/2001msec); 0 zone resets 00:20:16.739 slat (nsec): min=4783, max=38709, avg=6842.30, stdev=2117.02 00:20:16.739 clat (usec): min=327, max=9474, avg=4268.01, stdev=690.41 00:20:16.739 lat (usec): min=334, max=9493, avg=4274.85, stdev=691.28 00:20:16.739 clat percentiles (usec): 00:20:16.739 | 1.00th=[ 3032], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3687], 00:20:16.739 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4293], 60.00th=[ 4424], 00:20:16.739 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5473], 00:20:16.739 | 99.00th=[ 6390], 99.50th=[ 7111], 99.90th=[ 8848], 99.95th=[ 8979], 00:20:16.739 | 99.99th=[ 9372] 00:20:16.739 bw ( KiB/s): min=52984, max=68008, per=100.00%, avg=60533.33, stdev=7512.28, 
samples=3 00:20:16.739 iops : min=13246, max=17002, avg=15133.33, stdev=1878.07, samples=3 00:20:16.739 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:20:16.739 lat (msec) : 2=0.05%, 4=40.55%, 10=59.36% 00:20:16.739 cpu : usr=99.00%, sys=0.00%, ctx=3, majf=0, minf=606 00:20:16.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:16.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.739 issued rwts: total=29899,29896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.739 00:20:16.739 Run status group 0 (all jobs): 00:20:16.739 READ: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=117MiB (122MB), run=2001-2001msec 00:20:16.739 WRITE: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=117MiB (122MB), run=2001-2001msec 00:20:16.739 ----------------------------------------------------- 00:20:16.739 Suppressions used: 00:20:16.739 count bytes template 00:20:16.739 1 32 /usr/src/fio/parse.c 00:20:16.739 1 8 libtcmalloc_minimal.so 00:20:16.739 ----------------------------------------------------- 00:20:16.739 00:20:16.739 11:49:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:16.739 11:49:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:16.739 11:49:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:16.739 11:49:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:16.997 11:49:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:16.997 11:49:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:17.256 11:49:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:17.256 11:49:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:17.256 11:49:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:17.515 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:17.515 fio-3.35 00:20:17.515 Starting 1 thread 00:20:20.799 00:20:20.799 test: (groupid=0, jobs=1): err= 0: pid=69718: Thu Jul 25 11:49:17 2024 00:20:20.799 read: IOPS=14.6k, BW=57.0MiB/s (59.7MB/s)(114MiB/2001msec) 00:20:20.799 slat (nsec): min=4658, max=88043, avg=6812.85, stdev=2285.40 00:20:20.799 clat (usec): min=303, max=10000, avg=4362.77, stdev=721.77 00:20:20.799 lat (usec): min=309, max=10056, avg=4369.59, stdev=722.61 00:20:20.799 clat percentiles (usec): 00:20:20.799 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3982], 00:20:20.799 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:20:20.799 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5473], 00:20:20.799 | 99.00th=[ 7570], 99.50th=[ 8029], 99.90th=[ 8455], 99.95th=[ 8717], 00:20:20.799 | 99.99th=[ 9896] 00:20:20.799 bw ( KiB/s): min=57640, max=62712, per=100.00%, avg=59429.33, stdev=2846.72, samples=3 00:20:20.799 iops : min=14410, max=15678, avg=14857.33, stdev=711.68, samples=3 00:20:20.799 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(114MiB/2001msec); 0 zone resets 00:20:20.799 slat (nsec): min=4762, max=51159, avg=6983.17, stdev=2221.49 00:20:20.799 clat (usec): min=314, max=9773, avg=4368.84, stdev=723.37 00:20:20.799 lat (usec): min=320, max=9791, avg=4375.83, stdev=724.17 00:20:20.799 clat percentiles (usec): 00:20:20.799 | 1.00th=[ 3195], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3982], 00:20:20.799 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:20:20.799 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5538], 00:20:20.799 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 8455], 99.95th=[ 8717], 00:20:20.799 | 99.99th=[ 9634] 00:20:20.799 bw ( KiB/s): min=56944, max=62600, per=100.00%, avg=59242.67, stdev=2972.91, samples=3 00:20:20.799 iops : min=14236, max=15650, avg=14810.67, stdev=743.23, samples=3 00:20:20.799 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:20:20.799 lat (msec) : 2=0.04%, 4=20.33%, 10=79.59%, 20=0.01% 00:20:20.799 cpu : usr=98.80%, sys=0.15%, ctx=5, majf=0, minf=607 00:20:20.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:20.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.799 issued rwts: total=29185,29252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.799 00:20:20.799 Run status group 0 (all jobs): 00:20:20.799 READ: bw=57.0MiB/s (59.7MB/s), 57.0MiB/s-57.0MiB/s (59.7MB/s-59.7MB/s), io=114MiB (120MB), run=2001-2001msec 00:20:20.799 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=114MiB (120MB), run=2001-2001msec 00:20:20.799 
----------------------------------------------------- 00:20:20.799 Suppressions used: 00:20:20.799 count bytes template 00:20:20.799 1 32 /usr/src/fio/parse.c 00:20:20.799 1 8 libtcmalloc_minimal.so 00:20:20.799 ----------------------------------------------------- 00:20:20.799 00:20:20.799 11:49:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:20.799 11:49:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:20.799 11:49:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:20.799 11:49:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:21.057 11:49:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:21.057 11:49:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:21.316 11:49:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:21.316 11:49:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:21.316 11:49:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:21.575 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:21.575 fio-3.35 00:20:21.575 Starting 1 thread 00:20:25.761 00:20:25.761 test: (groupid=0, jobs=1): err= 0: pid=69779: Thu Jul 25 11:49:22 2024 00:20:25.761 read: IOPS=15.9k, BW=62.2MiB/s (65.2MB/s)(124MiB/2001msec) 00:20:25.761 slat (nsec): min=4643, max=50027, avg=6070.42, stdev=1745.71 
00:20:25.761 clat (usec): min=238, max=11068, avg=3998.55, stdev=567.67 00:20:25.761 lat (usec): min=245, max=11118, avg=4004.63, stdev=568.37 00:20:25.761 clat percentiles (usec): 00:20:25.761 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3621], 00:20:25.761 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3949], 00:20:25.761 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4686], 00:20:25.761 | 99.00th=[ 6390], 99.50th=[ 7046], 99.90th=[ 8225], 99.95th=[ 9110], 00:20:25.761 | 99.99th=[10814] 00:20:25.761 bw ( KiB/s): min=60232, max=69000, per=100.00%, avg=64616.00, stdev=4384.00, samples=3 00:20:25.761 iops : min=15058, max=17250, avg=16154.00, stdev=1096.00, samples=3 00:20:25.761 write: IOPS=15.9k, BW=62.3MiB/s (65.3MB/s)(125MiB/2001msec); 0 zone resets 00:20:25.761 slat (nsec): min=4720, max=68797, avg=6249.89, stdev=1870.59 00:20:25.761 clat (usec): min=319, max=10838, avg=4006.79, stdev=555.85 00:20:25.761 lat (usec): min=325, max=10855, avg=4013.04, stdev=556.54 00:20:25.761 clat percentiles (usec): 00:20:25.761 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:20:25.761 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3949], 00:20:25.761 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4686], 00:20:25.761 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 8291], 99.95th=[ 9241], 00:20:25.761 | 99.99th=[10552] 00:20:25.761 bw ( KiB/s): min=60664, max=68232, per=100.00%, avg=64352.00, stdev=3787.65, samples=3 00:20:25.761 iops : min=15166, max=17058, avg=16088.00, stdev=946.91, samples=3 00:20:25.761 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:20:25.761 lat (msec) : 2=0.05%, 4=62.73%, 10=37.14%, 20=0.03% 00:20:25.761 cpu : usr=98.85%, sys=0.20%, ctx=4, majf=0, minf=606 00:20:25.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:25.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.761 issued rwts: total=31854,31890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.761 00:20:25.761 Run status group 0 (all jobs): 00:20:25.761 READ: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=124MiB (130MB), run=2001-2001msec 00:20:25.761 WRITE: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=125MiB (131MB), run=2001-2001msec 00:20:25.761 ----------------------------------------------------- 00:20:25.761 Suppressions used: 00:20:25.761 count bytes template 00:20:25.761 1 32 /usr/src/fio/parse.c 00:20:25.761 1 8 libtcmalloc_minimal.so 00:20:25.761 ----------------------------------------------------- 00:20:25.761 00:20:25.761 11:49:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:25.761 11:49:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:25.761 11:49:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:25.761 11:49:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:26.019 11:49:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:26.019 11:49:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:26.278 11:49:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:26.278 11:49:23 
nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:26.279 11:49:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:26.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:26.279 fio-3.35 00:20:26.279 Starting 1 thread 00:20:30.463 00:20:30.463 test: (groupid=0, jobs=1): err= 0: pid=69845: Thu Jul 25 11:49:27 2024 00:20:30.463 read: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(119MiB/2001msec) 00:20:30.463 slat (nsec): min=4607, max=80214, avg=6189.86, stdev=1904.58 00:20:30.463 clat (usec): min=291, max=10769, avg=4164.91, stdev=582.75 00:20:30.463 lat (usec): min=297, max=10812, avg=4171.10, stdev=583.49 00:20:30.463 clat percentiles (usec): 00:20:30.463 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3654], 00:20:30.463 | 30.00th=[ 3752], 40.00th=[ 3916], 50.00th=[ 4293], 60.00th=[ 4424], 00:20:30.463 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:20:30.463 | 99.00th=[ 6259], 99.50th=[ 7242], 99.90th=[ 8094], 99.95th=[ 9110], 00:20:30.463 | 99.99th=[10552] 00:20:30.463 bw ( KiB/s): min=57448, max=65280, per=100.00%, avg=61586.67, stdev=3934.95, samples=3 00:20:30.463 iops : min=14362, max=16320, avg=15396.67, stdev=983.74, samples=3 00:20:30.463 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(119MiB/2001msec); 0 zone resets 00:20:30.463 slat (usec): min=4, max=108, avg= 6.36, stdev= 2.00 00:20:30.463 clat (usec): min=381, max=10634, avg=4182.30, stdev=595.31 00:20:30.463 lat (usec): min=386, max=10651, avg=4188.66, stdev=596.04 
00:20:30.463 clat percentiles (usec): 00:20:30.463 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:20:30.463 | 30.00th=[ 3752], 40.00th=[ 3949], 50.00th=[ 4293], 60.00th=[ 4424], 00:20:30.463 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:20:30.463 | 99.00th=[ 6521], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 9110], 00:20:30.463 | 99.99th=[10421] 00:20:30.463 bw ( KiB/s): min=57816, max=64216, per=100.00%, avg=61160.00, stdev=3209.71, samples=3 00:20:30.463 iops : min=14454, max=16054, avg=15290.00, stdev=802.43, samples=3 00:20:30.463 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:20:30.463 lat (msec) : 2=0.08%, 4=42.12%, 10=57.73%, 20=0.03% 00:20:30.463 cpu : usr=98.90%, sys=0.15%, ctx=5, majf=0, minf=604 00:20:30.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:30.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:30.463 issued rwts: total=30541,30588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:30.463 00:20:30.463 Run status group 0 (all jobs): 00:20:30.463 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2001-2001msec 00:20:30.463 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=119MiB (125MB), run=2001-2001msec 00:20:30.463 ----------------------------------------------------- 00:20:30.463 Suppressions used: 00:20:30.463 count bytes template 00:20:30.463 1 32 /usr/src/fio/parse.c 00:20:30.463 1 8 libtcmalloc_minimal.so 00:20:30.463 ----------------------------------------------------- 00:20:30.463 00:20:30.463 11:49:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:30.463 11:49:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:20:30.463 00:20:30.463 real 0m17.800s 00:20:30.463 user 0m13.778s 00:20:30.463 sys 0m3.248s 00:20:30.463 11:49:27 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.463 11:49:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:20:30.463 ************************************ 00:20:30.463 END TEST nvme_fio 00:20:30.463 ************************************ 00:20:30.463 00:20:30.463 real 1m31.116s 00:20:30.463 user 3m44.091s 00:20:30.463 sys 0m15.154s 00:20:30.463 11:49:27 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.463 11:49:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:30.463 ************************************ 00:20:30.463 END TEST nvme 00:20:30.463 ************************************ 00:20:30.463 11:49:27 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:20:30.463 11:49:27 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:30.463 11:49:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:30.463 11:49:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.463 11:49:27 -- common/autotest_common.sh@10 -- # set +x 00:20:30.463 ************************************ 00:20:30.463 START TEST nvme_scc 00:20:30.463 ************************************ 00:20:30.463 11:49:27 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:30.463 * Looking for test storage... 
00:20:30.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:30.463 11:49:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:30.463 11:49:27 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.463 11:49:27 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.463 11:49:27 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.463 11:49:27 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.463 11:49:27 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.463 11:49:27 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.463 11:49:27 nvme_scc -- paths/export.sh@5 -- # export PATH 00:20:30.463 11:49:27 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:30.463 11:49:27 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:20:30.463 11:49:27 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.463 11:49:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:20:30.463 11:49:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:20:30.463 11:49:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:20:30.463 11:49:27 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:31.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.029 Waiting for block devices as requested 00:20:31.287 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:31.287 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:31.287 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:31.544 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.812 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:36.812 11:49:33 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:36.812 11:49:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:36.812 11:49:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:36.812 11:49:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:36.812 11:49:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:20:36.812 
11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.812 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
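[editor's note] The repeating IFS=: / read -r reg val / eval triplets running through this trace are functions.sh's nvme_get loop: it pipes nvme-cli's id-ctrl (and later id-ns) output for the device and folds every "reg : value" line into a bash associative array keyed by register name. A rough standalone equivalent of that parsing step, assuming nvme-cli is installed and using this run's device name (the ctrl array name is illustrative):

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing pattern traced here: turn
    # `nvme id-ctrl` "reg : value" lines into an associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank and header lines
        reg=${reg//[[:space:]]/}               # drop the padding around the key
        ctrl[$reg]=${val# }                    # keep the value minus its leading space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "sn=${ctrl[sn]} mn=${ctrl[mn]} mdts=${ctrl[mdts]}"

The real helper evals into a named array per controller (nvme0, nvme1, ...) so later checks can index any register directly, which is why every field appears one by one in this trace; trailing padding in string values (e.g. nvme0[sn]='12341 ') is preserved, exactly as the echoes above show.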
00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:20:36.813 11:49:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.813 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
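[editor's note] Two of the registers just captured, wctemp=343 and cctemp=373, are temperature thresholds, which NVMe reports in kelvins; a one-line conversion makes the QEMU defaults obvious:

    # Kelvin -> Celsius for the thresholds captured above
    echo "wctemp: $((343 - 273)) C, cctemp: $((373 - 273)) C"
    # -> wctemp: 70 C, cctemp: 100 C  (warning / critical composite temperature)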
00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:20:36.814 11:49:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:20:36.814 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:36.815 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:36.816 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
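[editor's note] For the namespace, nsze/ncap/nuse=0x140000 and flbas=0x4 were captured above, and the LBA-format table just below maps index 4 to lbads:12. Decoding those values (the low nibble of flbas selects the active format, block size is 2^lbads) ties the run together: 4096-byte blocks, matching the fio job's --bs=4096, and 0x140000 of them is a 5 GiB namespace. A small sketch of that arithmetic, using only values from this trace:

    # Decode the active LBA format for nvme0n1 from the values in this trace.
    flbas=0x4; lbads=12; nsze=0x140000
    fmt=$(( flbas & 0xf ))     # bits 3:0 pick the LBA format index
    bs=$(( 1 << lbads ))       # lbaf4 below reports lbads:12 -> 4096
    echo "format $fmt, ${bs}-byte blocks, $(( nsze * bs / 1024**3 )) GiB namespace"
    # -> format 4, 4096-byte blocks, 5 GiB namespace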
00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:36.817 11:49:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:36.817 11:49:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:36.817 11:49:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:36.817 11:49:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:36.817 11:49:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:36.817 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:36.818 11:49:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 
11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:36.818 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:36.819 11:49:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:36.819 11:49:33 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.819 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:36.820 11:49:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:36.820 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 
11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:36.821 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
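[Editor's note] The trace above is the xtrace output of the nvme_get helper in nvme/functions.sh: it pipes `nvme id-ctrl` / `nvme id-ns` output through `IFS=: read -r reg val` and evals each "reg : value" pair into a global associative array (nvme1, nvme1n1, ...). Below is a minimal, self-contained sketch of that pattern as it can be read off the trace; the helper name and the whitespace trimming are illustrative assumptions, not code copied from functions.sh.

  #!/usr/bin/env bash
  # nvme_get_sketch: parse "reg : value" lines from an nvme-cli identify
  # command into a global associative array, mirroring the loop traced above.
  # Hypothetical helper; the real nvme_get differs in detail.
  nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # e.g. declares global array nvme1n1
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue      # skip blank/header lines, as in the trace
          reg=${reg//[[:space:]]/}       # "nsze   " -> "nsze"
          val=${val# }                   # drop the space right after the colon
          eval "${ref}[\$reg]=\$val"     # e.g. nvme1n1[nsze]=0x17a17a
      done < <("$@")                     # e.g. nvme id-ns /dev/nvme1n1
  }

A call would look like `nvme_get_sketch nvme1n1 nvme id-ns /dev/nvme1n1`, after which fields read back as `${nvme1n1[nsze]}`. Note that values containing colons (the lbafN descriptors such as "ms:0 lbads:9 rp:0") survive intact because `read` assigns the whole remainder of the line to the last variable.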
00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:36.822 11:49:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:36.822 11:49:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:20:36.822 11:49:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:36.822 11:49:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.822 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.823 11:49:33 
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:20:36.823 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:20:36.824 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:20:36.825 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
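The trace above is nvme/functions.sh's nvme_get walking the `nvme id-ctrl` output line by line: each `reg : val` pair is split on the first colon via IFS, and the value is stored into the nvme2 associative array. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and /dev/nvme0 exists; the array name `ctrl` is illustrative, not the script's own (the real helper evals into whatever array name it is passed):

#!/usr/bin/env bash
declare -A ctrl=()
# Each id-ctrl line looks like "vid       : 0x1b36"; split on the first ':'.
while IFS=: read -r reg val; do
  [[ -n $val ]] || continue        # skip lines that carry no value
  reg=${reg//[[:space:]]/}         # "ps    0" -> "ps0", drop name padding
  ctrl[$reg]=${val# }              # strip the single leading space
done < <(nvme id-ctrl /dev/nvme0)
echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

The eval visible in the trace exists only because the target array name is dynamic ($ref); with a fixed array name a plain assignment like the one above is sufficient. Note that IFS=: with two read variables keeps any further colons (e.g. in subnqn) inside val.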
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:20:36.826 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
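In the nvme2n1 dump above, flbas=0x4 selects LBA format 4, and lbaf4 (the entry marked "in use") reports lbads:12, i.e. 2^12 = 4096-byte logical blocks; with nsze=0x100000 blocks that works out to a 4 GiB namespace. A small sketch of that derivation, with the values copied from the trace and the variable names purely illustrative:

#!/usr/bin/env bash
flbas=0x4
nsze=0x100000
lbaf4='ms:0 lbads:12 rp:0 (in use)'
fmt=$(( flbas & 0xf ))                        # low nibble of FLBAS picks the format
desc="lbaf$fmt"                               # -> lbaf4
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${!desc}")
bs=$(( 1 << lbads ))                          # 1 << 12 = 4096 bytes
echo "block size: $bs B, namespace: $(( nsze * bs / 1024 / 1024 )) MiB"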
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:20:36.827 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:20:36.828 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift
00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@20
-- # local -gA 'nvme2n3=()' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.829 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.830 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
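The run of records above is nvme/functions.sh@16-23 caching the output of `nvme id-ns /dev/nvme2n3`: each "field : value" line emitted by nvme-cli is split with `IFS=:` and `read -r reg val`, then stored into a global associative array via `eval`. A minimal sketch of that pattern, assuming nvme-cli's default human-readable output; the function name and the whitespace cleanup below are illustrative, not the exact SPDK helper:

    # Sketch: parse "name : value" lines from nvme-cli into a global assoc array.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # e.g. declares a global nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "lbaf  0 " -> "lbaf0"
            val=${val# }                     # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"       # e.g. nvme2n3[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }
    # Usage sketch: nvme_get_sketch nvme2n3 /dev/nvme2n3; echo "${nvme2n3[nsze]}"

This is why every register in the log is followed by the same IFS=: and read pair: the loop simply walks the identify output one line at a time, and values that contain spaces (the lbaf descriptors) survive because the remainder of each line lands in a single read variable.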
00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:20:36.831 11:49:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:36.831 11:49:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:20:36.831 11:49:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:36.831 11:49:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:20:36.831 11:49:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:36.831 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:20:36.832 11:49:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:36.832 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:20:37.092 11:49:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:37.092 11:49:33 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:37.092 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:37.093 
11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
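With the id-ctrl fields cached, the records that follow register the controller: ctrls, nvmes and bdfs map the device name to its array name, its namespace-array name and its PCI address, while `local -n` binds a nameref so later lookups alias the per-controller array instead of copying it. A hedged sketch of that indirection (the array contents and the accessor name are illustrative):

    # Sketch: store only array *names* in the global maps; rebind via nameref on use.
    declare -A nvme3=([oncs]=0x15d [subnqn]=nqn.2019-08.org.qemu:fdp-subsys3)
    declare -A ctrls=([nvme3]=nvme3)        # controller -> name of its array
    declare -A bdfs=([nvme3]=0000:00:13.0)  # controller -> PCI address

    get_reg_sketch() {
        local ctrl=$1 reg=$2
        local -n _ctrl=${ctrls[$ctrl]}      # _ctrl now aliases the nvme3 array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    # get_reg_sketch nvme3 oncs -> 0x15d

Keeping names rather than copies matters here: four controllers and up to three namespaces each were parsed, and the feature probes below re-resolve each array on demand through exactly this nameref pattern.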
00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:20:37.093 11:49:33 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:20:37.093 11:49:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:20:37.093 11:49:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:20:37.093 11:49:33 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.287 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.287 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.287 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.287 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.287 11:49:35 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:20:38.287 11:49:35 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:38.287 11:49:35 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:38.287 11:49:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:20:38.287 ************************************ 00:20:38.287 START TEST nvme_simple_copy 00:20:38.287 ************************************ 00:20:38.287 11:49:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:20:38.545 Initializing NVMe Controllers
00:20:38.545 Attaching to 0000:00:10.0
00:20:38.546 Controller supports SCC. Attached to 0000:00:10.0
00:20:38.546 Namespace ID: 1 size: 6GB
00:20:38.546 Initialization complete.
00:20:38.546
00:20:38.546 Controller QEMU NVMe Ctrl (12340 )
00:20:38.546 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:20:38.546 Namespace Block Size:4096
00:20:38.546 Writing LBAs 0 to 63 with Random Data
00:20:38.546 Copied LBAs from 0 - 63 to the Destination LBA 256
00:20:38.546 LBAs matching Written Data: 64
00:20:38.546
00:20:38.546 real 0m0.351s
00:20:38.546 user 0m0.147s
00:20:38.546 sys 0m0.101s
00:20:38.546 11:49:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:38.546 11:49:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:20:38.546 ************************************
00:20:38.546 END TEST nvme_simple_copy
00:20:38.546 ************************************
00:20:38.546
00:20:38.546 real 0m8.155s
00:20:38.546 user 0m1.414s
00:20:38.546 sys 0m1.684s
00:20:38.546 11:49:35 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:38.546 11:49:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:20:38.546 ************************************
00:20:38.546 END TEST nvme_scc
00:20:38.546 ************************************
00:20:38.804 11:49:35 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]]
00:20:38.804 11:49:35 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]]
00:20:38.804 11:49:35 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]]
00:20:38.804 11:49:35 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]]
00:20:38.804 11:49:35 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:20:38.804 11:49:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:20:38.804 11:49:35 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:38.804 11:49:35 -- common/autotest_common.sh@10 -- # set +x
00:20:38.804 ************************************
00:20:38.804 START TEST nvme_fdp
00:20:38.804 ************************************
00:20:38.804 11:49:35 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:20:38.804 * Looking for test storage...
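The SCC gate that picked nvme1 above reduces to a single bit test: get_ctrls_with_feature reads each controller's ONCS value (0x15d on all four QEMU controllers here) and checks bit 8, the Copy command capability, via (( oncs & 1 << 8 )). A minimal standalone sketch of that probe, assuming nvme-cli is on PATH and /dev/nvme1 is the controller selected above:

  # Read ONCS from Identify Controller and test bit 8 (Copy / simple-copy support).
  oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
  if (( oncs & 1 << 8 )); then
      echo "/dev/nvme1 supports the Copy command (oncs=$oncs)"
  fi

With oncs=0x15d the bit (0x100) is set, so all four controllers qualify; nvme1 at 0000:00:10.0 is the one handed to simple_copy, which wrote LBAs 0-63, copied them to LBA 256, and verified all 64 match.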
00:20:38.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:38.804 11:49:35 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.804 11:49:35 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.804 11:49:35 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.804 11:49:35 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.804 11:49:35 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.804 11:49:35 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.804 11:49:35 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.804 11:49:35 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:20:38.804 11:49:35 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:38.804 11:49:35 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:20:38.804 11:49:35 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.804 11:49:35 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:39.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.321 Waiting for block devices as requested 00:20:39.321 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.321 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.580 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.580 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:44.890 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:44.890 11:49:41 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:44.890 11:49:41 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:44.890 11:49:41 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:44.890 11:49:41 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:44.890 11:49:41 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 
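Everything from here to the end of the controller scan is one mechanism: nvme_get runs nvme-cli's id-ctrl (and, per namespace, id-ns), splits each "register : value" line on ':' via IFS, and stores the pair in a global associative array named after the device. A condensed sketch of that pattern, as a standalone illustration rather than the functions.sh source itself (plain assignment stands in for the helper's eval; the nvme binary path is the one the log uses):

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}    # register name with padding stripped, e.g. vid, sn, oncs
      [[ -n $reg && -n $val ]] || continue
      nvme0[$reg]=${val# }        # value keeps its own trailing padding, as with sn='12341   '
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oncs=${nvme0[oncs]}"

Each controller ends up as one such array (nvme0, nvme1, ...), which helpers like get_nvme_ctrl_feature later index by register name; that is all the eval churn below amounts to.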
11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:20:44.890 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:20:44.890 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:20:44.891 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.891 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:20:44.892 11:49:41 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:44.892 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:44.893 
11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:44.893 
11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.893 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:44.894 11:49:41 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
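The stretch above is the xtrace of nvme_get (functions.sh@17-23) walking nvme-cli "key : value" output one line at a time: each iteration splits a line on ':' into reg and val, skips it when val is empty, and evals the pair into a global associative array named after the device. A minimal sketch of that loop follows; the helper name parse_id_output and the exact whitespace trimming are illustrative assumptions, not the script's verbatim code.

parse_id_output() {
    # Populate a global associative array named by $1 (e.g. nvme0n1),
    # mirroring `local -gA 'nvme0n1=()'` in the trace above.
    local ref=$1 reg val
    declare -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # same non-empty gate as functions.sh@22
        reg=${reg//[[:space:]]/}         # "lbaf  4 " -> "lbaf4"
        val=${val# }                     # drop the single space after ':'
        eval "${ref}[\$reg]=\$val"       # e.g. nvme0n1[mssrl]=128
    done
}

# Process substitution keeps the array in the current shell (a pipe would
# populate it in a throwaway subshell instead):
parse_id_output nvme0n1 < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1)
echo "${nvme0n1[mssrl]}"                 # prints 128 for the device traced here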
00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:44.894 11:49:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:44.894 11:49:41 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:44.894 11:49:41 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:44.895 11:49:41 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:44.895 11:49:41 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:44.895 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 
11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
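Just above, the outer loop finished registering nvme0 and moved on to nvme1: functions.sh@47-52 globs /sys/class/nvme/nvme*, resolves each controller's PCI address, gates it through pci_can_use (scripts/common.sh), and records the controller in the ctrls/nvmes/bdfs/ordered_ctrls maps before nvme_get runs id-ctrl on it. A sketch of that discovery pass, assuming the usual sysfs layout; deriving the BDF via readlink on the device symlink is an assumption, not necessarily how the script does it.

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # functions.sh@48
    ctrl_dev=${ctrl##*/}                              # e.g. nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
    # pci_can_use "$pci" would filter blocked/claimed devices here
    ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its ns array
    bdfs["$ctrl_dev"]=$pci                            # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by ctrl number
done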
00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.895 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:44.896 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
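Several of the values captured just above are bit fields rather than scalars: oacs=0x12a, frmw=0x3, lpa=0x7. Per the NVMe base specification, OACS bit 1 is Format NVM, bit 3 Namespace Management, and bit 5 Directives Send/Receive, the mechanism FDP placement hints travel over. A quick check against the value from this trace; has_bit is an illustrative helper, not part of functions.sh.

has_bit() { (( ( $1 >> $2 ) & 1 )); }   # test bit $2 of value $1

oacs=0x12a                               # captured for nvme1 above
has_bit "$oacs" 1 && echo "Format NVM supported"
has_bit "$oacs" 5 && echo "Directives supported"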
00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:44.896 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.896 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.897 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:44.898 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
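This id-ns pass for nvme1n1 runs inside the per-controller namespace walk seen a little earlier (functions.sh@53-58): a nameref points _ctrl_ns at the controller's nvmeX_ns array, each nvmeXnY node under the controller's sysfs directory is probed, and the namespace device is indexed by its namespace number. A sketch under those assumptions; enumerate_namespaces is a hypothetical wrapper name.

declare -A nvme1_ns=()

enumerate_namespaces() {
    local ctrl=$1 ns ns_dev
    local -n _ctrl_ns="${ctrl##*/}_ns"   # nameref, as in functions.sh@53
    for ns in "$ctrl/${ctrl##*/}n"*; do  # /sys/class/nvme/nvme1/nvme1n1 ...
        [[ -e $ns ]] || continue         # functions.sh@55
        ns_dev=${ns##*/}                 # nvme1n1
        # nvme_get "$ns_dev" id-ns "/dev/$ns_dev" would fill the
        # per-namespace array here, as in the trace above
        _ctrl_ns[${ns##*n}]=$ns_dev      # nvme1_ns[1]=nvme1n1, functions.sh@58
    done
}

enumerate_namespaces /sys/class/nvme/nvme1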
00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 
11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
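Each lbafN value parsed in this stretch packs three fields: ms (metadata bytes per block), lbads (log2 of the data block size), and rp (relative performance); the entry tagged "(in use)" is the one selected by the namespace's flbas. nvme0n1 above ran lbaf4 (ms:0, lbads:12), and nvme1n1's flbas=0x7 points at lbaf7 (ms:64, lbads:12, flagged in use just below), so both use 4096-byte data blocks:

lbads=12
echo "data block: $((1 << lbads)) bytes"   # 4096
# nvme1n1: flbas=0x7 selects lbaf7 -> 64 metadata bytes per 4096-byte block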
00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:44.899 11:49:41 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:44.899 11:49:41 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:20:44.899 11:49:41 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:44.899 11:49:41 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.899 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:20:44.900 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.900 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:20:44.901 11:49:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.901 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:44.902 11:49:41 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 
11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.902 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:44.903 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:20:45.165 11:49:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.165 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.166 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:20:45.167 11:49:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.167 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:20:45.168 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:42 nvme_fdp -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:45.169 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:20:45.170 11:49:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.170 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:45.171 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:20:45.172 11:49:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:45.172 11:49:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:20:45.172 11:49:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:45.172 11:49:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.172 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:20:45.173 11:49:42 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.173 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.173 11:49:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
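The id-ctrl pass above records nvme3[oacs]=0x12a, the Optional Admin Command Support bitmap. A sketch decoding a few of its bits (bit positions per the NVMe base spec; hedged, as the test script itself never decodes OACS):

    # Sketch: OACS bits for the value captured above (nvme3[oacs]=0x12a).
    oacs=0x12a
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"
    (( oacs & (1 << 5) )) && echo "Directives supported"    # used by FDP I/O
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"

0x12a is binary 1_0010_1010, so exactly bits 1, 3, 5, and 8 are set, consistent with a QEMU controller advertising the Directives support that FDP traffic relies on.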
00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.174 11:49:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:20:45.174 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
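The wctemp/cctemp values recorded just above (343 and 373) are the controller's temperature thresholds, which id-ctrl reports in kelvins. A trivial conversion sketch using those captured values:

    # Sketch: WCTEMP/CCTEMP are reported in kelvins; 343/373 correspond to
    # roughly 70 C warning and 100 C critical thresholds.
    wctemp=343 cctemp=373
    echo "warning threshold:  $(( wctemp - 273 )) C"
    echo "critical threshold: $(( cctemp - 273 )) C"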
00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:45.175 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 
11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:45.176 11:49:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:45.176 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:20:45.177 11:49:42 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:20:45.177 11:49:42 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:20:45.177 11:49:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:20:45.177 11:49:42 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:20:45.177 11:49:42 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:45.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:46.311 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:46.311 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:46.311 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:46.311 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:46.568 11:49:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:46.568 11:49:43 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:46.568 11:49:43 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.568 11:49:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:46.568 ************************************ 00:20:46.568 START TEST nvme_flexible_data_placement 00:20:46.568 ************************************ 00:20:46.568 11:49:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:46.825 Initializing NVMe Controllers 00:20:46.825 Attaching to 0000:00:13.0 00:20:46.826 Controller supports FDP Attached to 0000:00:13.0 00:20:46.826 Namespace ID: 1 Endurance Group ID: 1 
00:20:46.826 Initialization complete. 00:20:46.826 00:20:46.826 ================================== 00:20:46.826 == FDP tests for Namespace: #01 == 00:20:46.826 ================================== 00:20:46.826 00:20:46.826 Get Feature: FDP: 00:20:46.826 ================= 00:20:46.826 Enabled: Yes 00:20:46.826 FDP configuration Index: 0 00:20:46.826 00:20:46.826 FDP configurations log page 00:20:46.826 =========================== 00:20:46.826 Number of FDP configurations: 1 00:20:46.826 Version: 0 00:20:46.826 Size: 112 00:20:46.826 FDP Configuration Descriptor: 0 00:20:46.826 Descriptor Size: 96 00:20:46.826 Reclaim Group Identifier format: 2 00:20:46.826 FDP Volatile Write Cache: Not Present 00:20:46.826 FDP Configuration: Valid 00:20:46.826 Vendor Specific Size: 0 00:20:46.826 Number of Reclaim Groups: 2 00:20:46.826 Number of Reclaim Unit Handles: 8 00:20:46.826 Max Placement Identifiers: 128 00:20:46.826 Number of Namespaces Supported: 256 00:20:46.826 Reclaim Unit Nominal Size: 6000000 bytes 00:20:46.826 Estimated Reclaim Unit Time Limit: Not Reported 00:20:46.826 RUH Desc #000: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #001: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #002: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #003: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #004: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #005: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #006: RUH Type: Initially Isolated 00:20:46.826 RUH Desc #007: RUH Type: Initially Isolated 00:20:46.826 00:20:46.826 FDP reclaim unit handle usage log page 00:20:46.826 ====================================== 00:20:46.826 Number of Reclaim Unit Handles: 8 00:20:46.826 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:20:46.826 RUH Usage Desc #001: RUH Attributes: Unused 00:20:46.826 RUH Usage Desc #002: RUH Attributes: Unused 00:20:46.826 RUH Usage Desc #003: RUH Attributes: Unused 00:20:46.826 RUH Usage Desc #004: RUH Attributes: Unused 00:20:46.826 RUH Usage Desc #005: RUH Attributes: Unused 00:20:46.826 RUH Usage Desc #006: RUH Attributes: Unused 00:20:46.826 RUH Usage Desc #007: RUH Attributes: Unused 00:20:46.826 00:20:46.826 FDP statistics log page 00:20:46.826 ======================= 00:20:46.826 Host bytes with metadata written: 811298816 00:20:46.826 Media bytes with metadata written: 811384832 00:20:46.826 Media bytes erased: 0 00:20:46.826 00:20:46.826 FDP Reclaim unit handle status 00:20:46.826 ============================== 00:20:46.826 Number of RUHS descriptors: 2 00:20:46.826 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005a49 00:20:46.826 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:20:46.826 00:20:46.826 FDP write on placement id: 0 success 00:20:46.826 00:20:46.826 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:20:46.826 00:20:46.826 IO mgmt send: RUH update for Placement ID: #0 Success 00:20:46.826 00:20:46.826 Get Feature: FDP Events for Placement handle: #0 00:20:46.826 ======================== 00:20:46.826 Number of FDP Events: 6 00:20:46.826 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:20:46.826 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:20:46.826 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:20:46.826 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:20:46.826 FDP Event: #4 Type: Media Reallocated Enabled: No 00:20:46.826 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
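The controller pick above hinged on a single bit: CTRATT bit 19 advertises Flexible Data Placement, which is why nvme3 (ctratt=0x88010) was selected while the 0x8000 controllers were skipped, and the pages printed here are the FDP log pages defined by NVMe TP4146. A minimal sketch of reproducing that discovery with stock nvme-cli, outside the test harness (assumptions: /dev/nvme3 maps to the controller at 0000:00:13.0, log IDs 0x20-0x23 are the FDP configurations/usage/statistics/events pages, and the --lsi value names endurance group 1 as reported above; newer nvme-cli also ships dedicated "nvme fdp" subcommands that wrap the same log reads):

    #!/usr/bin/env bash
    # Sketch: detect FDP support and dump the FDP log pages with nvme-cli.
    dev=/dev/nvme3

    # CTRATT is printed by id-ctrl; bit 19 advertises Flexible Data Placement.
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "$dev supports FDP (ctratt=$ctratt)"
    else
        echo "$dev does not support FDP" >&2
        exit 1
    fi

    # Raw dumps of the FDP log pages. These are endurance-group scoped, so the
    # endurance group ID (1, per the output above) goes in as the log specific
    # identifier; --log-len=512 is a generous upper bound for this sketch.
    for lid in 0x20 0x21 0x22 0x23; do
        nvme get-log "$dev" --log-id="$lid" --lsi=1 --log-len=512
    done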
00:20:46.826 00:20:46.826 FDP events log page 00:20:46.826 =================== 00:20:46.826 Number of FDP events: 1 00:20:46.826 FDP Event #0: 00:20:46.826 Event Type: RU Not Written to Capacity 00:20:46.826 Placement Identifier: Valid 00:20:46.826 NSID: Valid 00:20:46.826 Location: Valid 00:20:46.826 Placement Identifier: 0 00:20:46.826 Event Timestamp: b 00:20:46.826 Namespace Identifier: 1 00:20:46.826 Reclaim Group Identifier: 0 00:20:46.826 Reclaim Unit Handle Identifier: 0 00:20:46.826 00:20:46.826 FDP test passed 00:20:46.826 00:20:46.826 real 0m0.343s 00:20:46.826 user 0m0.130s 00:20:46.826 sys 0m0.109s 00:20:46.826 11:49:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.826 ************************************ 00:20:46.826 END TEST nvme_flexible_data_placement 00:20:46.826 ************************************ 00:20:46.826 11:49:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:20:46.826 ************************************ 00:20:46.826 END TEST nvme_fdp 00:20:46.826 ************************************ 00:20:46.826 00:20:46.826 real 0m8.200s 00:20:46.826 user 0m1.333s 00:20:46.826 sys 0m1.726s 00:20:46.826 11:49:43 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.826 11:49:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:46.826 11:49:43 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:20:46.826 11:49:43 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:46.826 11:49:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:46.826 11:49:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.826 11:49:43 -- common/autotest_common.sh@10 -- # set +x 00:20:46.826 ************************************ 00:20:46.826 START TEST nvme_rpc 00:20:46.826 ************************************ 00:20:46.826 11:49:43 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:47.085 * Looking for test storage... 
00:20:47.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:47.085 11:49:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.085 11:49:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:47.085 11:49:43 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:47.085 11:49:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:20:47.085 11:49:44 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71183 00:20:47.085 11:49:44 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:47.085 11:49:44 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:47.085 11:49:44 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71183 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71183 ']' 00:20:47.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.085 11:49:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.085 [2024-07-25 11:49:44.109488] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:47.085 [2024-07-25 11:49:44.109885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:20:47.343 [2024-07-25 11:49:44.279058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:47.602 [2024-07-25 11:49:44.498161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.602 [2024-07-25 11:49:44.498170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.538 11:49:45 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.538 11:49:45 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:20:48.538 11:49:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:48.538 Nvme0n1 00:20:48.538 11:49:45 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:48.538 11:49:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:48.796 request: 00:20:48.796 { 00:20:48.796 "bdev_name": "Nvme0n1", 00:20:48.796 "filename": "non_existing_file", 00:20:48.796 "method": "bdev_nvme_apply_firmware", 00:20:48.796 "req_id": 1 00:20:48.796 } 00:20:48.796 Got JSON-RPC error response 00:20:48.796 response: 00:20:48.796 { 00:20:48.796 "code": -32603, 00:20:48.796 "message": "open file failed." 00:20:48.796 } 00:20:49.055 11:49:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:49.055 11:49:45 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:49.055 11:49:45 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:49.313 11:49:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:49.313 11:49:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71183 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71183 ']' 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71183 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71183 00:20:49.313 killing process with pid 71183 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71183' 00:20:49.313 11:49:46 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71183 00:20:49.314 11:49:46 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71183 00:20:51.243 ************************************ 00:20:51.243 END TEST nvme_rpc 00:20:51.243 ************************************ 00:20:51.243 00:20:51.243 real 0m4.312s 00:20:51.243 user 0m8.301s 00:20:51.243 sys 0m0.614s 00:20:51.243 11:49:48 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:51.243 11:49:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:51.243 11:49:48 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:51.244 11:49:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:20:51.244 11:49:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:51.244 11:49:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.244 ************************************ 00:20:51.244 START TEST nvme_rpc_timeouts 00:20:51.244 ************************************ 00:20:51.244 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:51.502 * Looking for test storage... 00:20:51.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71261 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71261 00:20:51.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71285 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:51.502 11:49:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71285 00:20:51.502 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71285 ']' 00:20:51.502 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.502 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.502 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.502 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.502 11:49:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:51.502 [2024-07-25 11:49:48.427929] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:51.502 [2024-07-25 11:49:48.428340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71285 ] 00:20:51.760 [2024-07-25 11:49:48.606937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:52.019 [2024-07-25 11:49:48.836555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.019 [2024-07-25 11:49:48.836563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.586 11:49:49 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.586 Checking default timeout settings: 00:20:52.586 11:49:49 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:20:52.586 11:49:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:20:52.586 11:49:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:53.154 Making settings changes with rpc: 00:20:53.154 11:49:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:20:53.154 11:49:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:20:53.413 Check default vs. modified settings: 00:20:53.413 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:20:53.413 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:53.671 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:20:53.671 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:53.671 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71261 00:20:53.671 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:53.671 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:53.671 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71261 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:53.672 Setting action_on_timeout is changed as expected. 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
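Each of these verification blocks runs the same three-stage pipeline: grep the setting name out of the saved config dump, take the second whitespace-delimited field with awk, strip punctuation with sed, and compare the default value against the modified one; the same pattern repeats below for timeout_us and timeout_admin_us. Condensed into one sketch (check_setting is an illustrative helper name, not part of the test scripts; the /tmp paths are the settings files created above):

    # Sketch of the per-setting default-vs-modified comparison.
    check_setting() {
        local setting=$1 default_file=$2 modified_file=$3 before after
        before=$(grep "$setting" "$default_file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified_file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ "$before" == "$after" ]]; then
            echo "Setting $setting was NOT changed" >&2
            return 1
        fi
        echo "Setting $setting is changed as expected ($before -> $after)"
    }

    for s in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$s" /tmp/settings_default_71261 /tmp/settings_modified_71261
    done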
00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71261 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71261 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:53.672 Setting timeout_us is changed as expected. 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71261 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71261 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:53.672 Setting timeout_admin_us is changed as expected. 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
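Since save_config emits JSON, the same comparison can also be done structurally rather than by scraping flattened text with grep and sed. A hedged alternative sketch, assuming the usual save_config layout in which bdev_nvme_set_options lands under the bdev subsystem (jq is already a dependency of these tests, as in the gen_nvme.sh pipeline earlier):

    # Sketch: pull the three nvme timeout options straight out of the
    # save_config JSON instead of scraping the text dump.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config \
        | jq '.subsystems[]
              | select(.subsystem == "bdev").config[]
              | select(.method == "bdev_nvme_set_options").params
              | {action_on_timeout, timeout_us, timeout_admin_us}'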
00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71261 /tmp/settings_modified_71261 00:20:53.672 11:49:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71285 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71285 ']' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71285 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71285 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:53.672 killing process with pid 71285 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71285' 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71285 00:20:53.672 11:49:50 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71285 00:20:56.204 RPC TIMEOUT SETTING TEST PASSED. 00:20:56.204 11:49:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:20:56.204 00:20:56.204 real 0m4.503s 00:20:56.204 user 0m8.656s 00:20:56.204 sys 0m0.587s 00:20:56.204 ************************************ 00:20:56.204 END TEST nvme_rpc_timeouts 00:20:56.204 ************************************ 00:20:56.204 11:49:52 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.204 11:49:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:56.204 11:49:52 -- spdk/autotest.sh@247 -- # uname -s 00:20:56.204 11:49:52 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:20:56.204 11:49:52 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:20:56.204 11:49:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:56.204 11:49:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.204 11:49:52 -- common/autotest_common.sh@10 -- # set +x 00:20:56.204 ************************************ 00:20:56.204 START TEST sw_hotplug 00:20:56.204 ************************************ 00:20:56.204 11:49:52 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:20:56.204 * Looking for test storage... 
00:20:56.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:56.204 11:49:52 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:56.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:56.465 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:56.465 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:56.465 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:56.465 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:56.465 11:49:53 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:20:56.465 11:49:53 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:20:56.465 11:49:53 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:20:56.465 11:49:53 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@230 -- # local class 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@15 -- # local i 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:56.465 11:49:53 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@15 -- # local i 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@15 -- # local i 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:56.465 11:49:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:20:56.466 11:49:53 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:56.466 11:49:53 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:20:56.466 11:49:53 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:20:56.466 11:49:53 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:56.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.002 Waiting for block devices as requested 00:20:57.002 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.260 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.260 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.260 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.524 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:02.524 11:49:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:21:02.524 11:49:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:02.782 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:21:03.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.040 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:21:03.298 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:21:03.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:21:03.556 11:50:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72148 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:21:03.556 11:50:00 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:21:03.556 11:50:00 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:21:03.556 11:50:00 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:21:03.556 11:50:00 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:21:03.556 11:50:00 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:03.556 11:50:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:21:03.815 Initializing NVMe Controllers 00:21:03.815 Attaching to 0000:00:10.0 00:21:03.815 Attaching to 0000:00:11.0 00:21:03.815 Attached to 0000:00:10.0 00:21:03.815 Attached to 0000:00:11.0 00:21:03.815 Initialization complete. Starting I/O... 
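The remove/attach cycles that follow are driven entirely through sysfs: the "echo 1" lines unplug a controller and trigger a bus rescan, and the uio_pci_generic/BDF/empty-string writes steer the re-discovered device back onto the userspace driver. A minimal sketch of one such cycle using the standard Linux sysfs knobs (the exact choreography inside sw_hotplug.sh may differ in detail; the BDF and driver here match the values echoed below):

    # Sketch of one software hotplug cycle over sysfs.
    bdf=0000:00:10.0
    driver=uio_pci_generic

    # Unplug: drop the device from the PCI tree.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"

    # Replug: rescan the bus so the device reappears...
    echo 1 > /sys/bus/pci/rescan

    # ...then pin it to the userspace driver and clear the override again.
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"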
00:21:03.815 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:21:03.815 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:21:03.815 00:21:05.190 QEMU NVMe Ctrl (12340 ): 1118 I/Os completed (+1118) 00:21:05.190 QEMU NVMe Ctrl (12341 ): 1194 I/Os completed (+1194) 00:21:05.190 00:21:06.126 QEMU NVMe Ctrl (12340 ): 2710 I/Os completed (+1592) 00:21:06.126 QEMU NVMe Ctrl (12341 ): 2850 I/Os completed (+1656) 00:21:06.126 00:21:07.061 QEMU NVMe Ctrl (12340 ): 4518 I/Os completed (+1808) 00:21:07.061 QEMU NVMe Ctrl (12341 ): 4757 I/Os completed (+1907) 00:21:07.061 00:21:07.995 QEMU NVMe Ctrl (12340 ): 6168 I/Os completed (+1650) 00:21:07.995 QEMU NVMe Ctrl (12341 ): 6480 I/Os completed (+1723) 00:21:07.995 00:21:08.929 QEMU NVMe Ctrl (12340 ): 7782 I/Os completed (+1614) 00:21:08.929 QEMU NVMe Ctrl (12341 ): 8258 I/Os completed (+1778) 00:21:08.929 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:09.864 [2024-07-25 11:50:06.570474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:09.864 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:09.864 [2024-07-25 11:50:06.572607] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.572826] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.573011] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.573181] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:09.864 [2024-07-25 11:50:06.576268] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.576442] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.576612] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.576782] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:09.864 [2024-07-25 11:50:06.598174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:09.864 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:09.864 [2024-07-25 11:50:06.600027] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.600087] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.600121] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.600145] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:09.864 [2024-07-25 11:50:06.602765] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.602816] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.602845] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 [2024-07-25 11:50:06.602867] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:09.864 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:21:09.864 EAL: Scan for (pci) bus failed. 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:09.864 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:09.864 Attaching to 0000:00:10.0 00:21:09.864 Attached to 0000:00:10.0 00:21:09.864 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:10.123 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:10.123 11:50:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:10.123 Attaching to 0000:00:11.0 00:21:10.123 Attached to 0000:00:11.0 00:21:11.088 QEMU NVMe Ctrl (12340 ): 1585 I/Os completed (+1585) 00:21:11.088 QEMU NVMe Ctrl (12341 ): 1559 I/Os completed (+1559) 00:21:11.088 00:21:12.024 QEMU NVMe Ctrl (12340 ): 3305 I/Os completed (+1720) 00:21:12.024 QEMU NVMe Ctrl (12341 ): 3386 I/Os completed (+1827) 00:21:12.024 00:21:12.957 QEMU NVMe Ctrl (12340 ): 5083 I/Os completed (+1778) 00:21:12.957 QEMU NVMe Ctrl (12341 ): 5240 I/Os completed (+1854) 00:21:12.957 00:21:13.892 QEMU NVMe Ctrl (12340 ): 6655 I/Os completed (+1572) 00:21:13.892 QEMU NVMe Ctrl (12341 ): 7083 I/Os completed (+1843) 00:21:13.892 00:21:14.827 QEMU NVMe Ctrl (12340 ): 8242 I/Os completed (+1587) 00:21:14.827 QEMU NVMe Ctrl (12341 ): 8859 I/Os completed (+1776) 00:21:14.827 00:21:16.203 QEMU NVMe Ctrl (12340 ): 9834 I/Os completed (+1592) 00:21:16.203 QEMU NVMe Ctrl (12341 ): 10620 I/Os completed (+1761) 00:21:16.203 00:21:16.810 QEMU NVMe Ctrl (12340 ): 11614 I/Os completed (+1780) 00:21:16.810 QEMU NVMe Ctrl (12341 ): 12518 I/Os completed (+1898) 
00:21:16.810 00:21:18.183 QEMU NVMe Ctrl (12340 ): 13238 I/Os completed (+1624) 00:21:18.183 QEMU NVMe Ctrl (12341 ): 14294 I/Os completed (+1776) 00:21:18.183 00:21:19.113 QEMU NVMe Ctrl (12340 ): 14802 I/Os completed (+1564) 00:21:19.113 QEMU NVMe Ctrl (12341 ): 16039 I/Os completed (+1745) 00:21:19.113 00:21:20.044 QEMU NVMe Ctrl (12340 ): 16438 I/Os completed (+1636) 00:21:20.044 QEMU NVMe Ctrl (12341 ): 17810 I/Os completed (+1771) 00:21:20.044 00:21:21.003 QEMU NVMe Ctrl (12340 ): 18003 I/Os completed (+1565) 00:21:21.003 QEMU NVMe Ctrl (12341 ): 19576 I/Os completed (+1766) 00:21:21.003 00:21:21.938 QEMU NVMe Ctrl (12340 ): 19669 I/Os completed (+1666) 00:21:21.938 QEMU NVMe Ctrl (12341 ): 21404 I/Os completed (+1828) 00:21:21.938 00:21:21.938 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:21.938 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:21.938 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:21.938 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:21.938 [2024-07-25 11:50:18.924813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:21.938 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:21.938 [2024-07-25 11:50:18.926990] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 [2024-07-25 11:50:18.927275] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 [2024-07-25 11:50:18.927433] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 [2024-07-25 11:50:18.927597] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:21.938 [2024-07-25 11:50:18.930613] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 [2024-07-25 11:50:18.930806] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 [2024-07-25 11:50:18.930877] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 [2024-07-25 11:50:18.931047] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.938 EAL: Cannot open sysfs resource 00:21:21.938 EAL: pci_scan_one(): cannot parse resource 00:21:21.938 EAL: Scan for (pci) bus failed. 00:21:21.939 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:21.939 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:21.939 [2024-07-25 11:50:18.949359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:21.939 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:21.939 [2024-07-25 11:50:18.951305] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 [2024-07-25 11:50:18.951379] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 [2024-07-25 11:50:18.951415] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 [2024-07-25 11:50:18.951441] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:21.939 [2024-07-25 11:50:18.954164] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 [2024-07-25 11:50:18.954221] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 [2024-07-25 11:50:18.954257] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 [2024-07-25 11:50:18.954292] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:21.939 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:21.939 11:50:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:22.196 Attaching to 0000:00:10.0 00:21:22.196 Attached to 0000:00:10.0 00:21:22.196 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:22.454 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:22.454 11:50:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:22.454 Attaching to 0000:00:11.0 00:21:22.454 Attached to 0000:00:11.0 00:21:23.019 QEMU NVMe Ctrl (12340 ): 1164 I/Os completed (+1164) 00:21:23.019 QEMU NVMe Ctrl (12341 ): 1001 I/Os completed (+1001) 00:21:23.019 00:21:23.953 QEMU NVMe Ctrl (12340 ): 2739 I/Os completed (+1575) 00:21:23.953 QEMU NVMe Ctrl (12341 ): 2719 I/Os completed (+1718) 00:21:23.953 00:21:24.886 QEMU NVMe Ctrl (12340 ): 4405 I/Os completed (+1666) 00:21:24.886 QEMU NVMe Ctrl (12341 ): 4499 I/Os completed (+1780) 00:21:24.886 00:21:25.824 QEMU NVMe Ctrl (12340 ): 6033 I/Os completed (+1628) 00:21:25.824 QEMU NVMe Ctrl (12341 ): 6281 I/Os completed (+1782) 00:21:25.824 00:21:27.212 QEMU NVMe Ctrl (12340 ): 7755 I/Os completed (+1722) 00:21:27.212 QEMU NVMe Ctrl (12341 ): 8051 I/Os completed (+1770) 00:21:27.212 00:21:27.778 QEMU NVMe Ctrl (12340 ): 9407 I/Os completed (+1652) 00:21:27.778 QEMU NVMe Ctrl (12341 ): 9874 I/Os completed (+1823) 00:21:27.778 00:21:29.154 QEMU NVMe Ctrl (12340 ): 11143 I/Os completed (+1736) 00:21:29.154 QEMU NVMe Ctrl (12341 ): 11706 I/Os completed (+1832) 00:21:29.154 00:21:30.089 QEMU NVMe Ctrl (12340 ): 12886 I/Os completed (+1743) 00:21:30.089 QEMU NVMe Ctrl (12341 ): 13611 I/Os completed (+1905) 00:21:30.089 00:21:31.024 QEMU 
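The timing_cmd/debug_remove_attach_helper wrapper set up at the start of this test is what eventually produces the "remove_attach_helper took NN.NNs" line further down: TIMEFORMAT=%2R makes bash's time keyword print only the elapsed real seconds, which are captured from stderr while the timed command's own output passes through untouched. A condensed sketch of that pattern (time_helper is an illustrative name; sleep stands in for the real hotplug loop):

    # Sketch of the TIMEFORMAT-based timing wrapper.
    time_helper() {
        local time=0 TIMEFORMAT=%2R
        exec 3>&1                            # keep a handle on the real stdout
        time=$( { time "$@" 1>&3; } 2>&1 )   # capture time's stderr report only
        exec 3>&-
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
            "$time" 2
    }

    time_helper sleep 3

One caveat of the condensed form: anything the timed command itself writes to stderr is captured along with the timing report, which is why the real helper is more elaborate.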
NVMe Ctrl (12340 ): 14512 I/Os completed (+1626) 00:21:31.024 QEMU NVMe Ctrl (12341 ): 15366 I/Os completed (+1755) 00:21:31.024 00:21:31.959 QEMU NVMe Ctrl (12340 ): 16189 I/Os completed (+1677) 00:21:31.959 QEMU NVMe Ctrl (12341 ): 17182 I/Os completed (+1816) 00:21:31.959 00:21:32.894 QEMU NVMe Ctrl (12340 ): 17912 I/Os completed (+1723) 00:21:32.894 QEMU NVMe Ctrl (12341 ): 18990 I/Os completed (+1808) 00:21:32.894 00:21:33.828 QEMU NVMe Ctrl (12340 ): 19625 I/Os completed (+1713) 00:21:33.828 QEMU NVMe Ctrl (12341 ): 20846 I/Os completed (+1856) 00:21:33.828 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:34.394 [2024-07-25 11:50:31.246229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:34.394 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:34.394 [2024-07-25 11:50:31.248326] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.248403] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.248433] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.248460] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:34.394 [2024-07-25 11:50:31.251338] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.251415] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.251443] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.251465] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:34.394 [2024-07-25 11:50:31.276953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:34.394 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:34.394 [2024-07-25 11:50:31.278711] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.278775] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.278807] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.278833] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:34.394 [2024-07-25 11:50:31.281347] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.281402] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.281432] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 [2024-07-25 11:50:31.281453] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:34.394 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:21:34.394 EAL: Scan for (pci) bus failed. 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:34.394 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:34.653 Attaching to 0000:00:10.0 00:21:34.653 Attached to 0000:00:10.0 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:34.653 11:50:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:34.653 Attaching to 0000:00:11.0 00:21:34.653 Attached to 0000:00:11.0 00:21:34.653 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:34.653 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:34.653 [2024-07-25 11:50:31.622942] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:21:46.851 11:50:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:46.851 11:50:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:46.851 11:50:43 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.05 00:21:46.851 11:50:43 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.05 00:21:46.851 11:50:43 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:21:46.851 11:50:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.05 00:21:46.851 11:50:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:21:46.851 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 11:50:43 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72148 00:21:53.412 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72148) - No such process 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72148 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=72687 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:21:53.412 11:50:49 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 72687 00:21:53.412 11:50:49 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 72687 ']' 00:21:53.412 11:50:49 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.412 11:50:49 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.412 11:50:49 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.412 11:50:49 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.412 11:50:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:53.412 [2024-07-25 11:50:49.725797] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:53.412 [2024-07-25 11:50:49.725956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72687 ] 00:21:53.412 [2024-07-25 11:50:49.890845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.412 [2024-07-25 11:50:50.077541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:21:53.979 11:50:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:53.979 11:50:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:00.576 11:50:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.576 11:50:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:00.576 [2024-07-25 11:50:56.890569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:22:00.576 11:50:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.576 [2024-07-25 11:50:56.893732] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:56.893784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:56.893825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-25 11:50:56.893855] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:56.893876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:56.893892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-25 11:50:56.893910] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:56.893924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:56.893940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-25 11:50:56.893954] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:56.893972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:56.893986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:00.576 11:50:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:00.576 [2024-07-25 11:50:57.390576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:22:00.576 [2024-07-25 11:50:57.393448] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:57.393511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:57.393535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-25 11:50:57.393573] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:57.393590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:57.393607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-25 11:50:57.393623] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:57.393639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:57.393653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-25 11:50:57.393670] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:00.576 [2024-07-25 11:50:57.393684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.576 [2024-07-25 11:50:57.393969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:00.576 11:50:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.576 11:50:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:00.576 11:50:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:00.576 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:00.832 11:50:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:13.027 11:51:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:13.027 11:51:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:13.027 11:51:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:13.027 [2024-07-25 11:51:09.890753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:13.027 [2024-07-25 11:51:09.893417] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:13.027 [2024-07-25 11:51:09.893473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.027 [2024-07-25 11:51:09.893501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.027 [2024-07-25 11:51:09.893530] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.027 [2024-07-25 11:51:09.893549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.027 [2024-07-25 11:51:09.893564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.027 [2024-07-25 11:51:09.893582] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.027 [2024-07-25 11:51:09.893596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.027 [2024-07-25 11:51:09.893612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.027 [2024-07-25 11:51:09.893626] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.027 [2024-07-25 11:51:09.893642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.027 [2024-07-25 11:51:09.893656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.027 11:51:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.027 11:51:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:13.027 11:51:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:13.027 11:51:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:13.285 [2024-07-25 11:51:10.290760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:22:13.285 [2024-07-25 11:51:10.293392] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.286 [2024-07-25 11:51:10.293481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.286 [2024-07-25 11:51:10.293505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.286 [2024-07-25 11:51:10.293538] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.286 [2024-07-25 11:51:10.293555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.286 [2024-07-25 11:51:10.293572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.286 [2024-07-25 11:51:10.293587] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.286 [2024-07-25 11:51:10.293638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.286 [2024-07-25 11:51:10.293653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.286 [2024-07-25 11:51:10.293670] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.286 [2024-07-25 11:51:10.293684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.286 [2024-07-25 11:51:10.293700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.544 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:13.544 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:13.544 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:13.544 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:13.544 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:13.545 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:13.545 11:51:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.545 11:51:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:13.545 11:51:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.545 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:13.545 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:13.804 11:51:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:26.013 11:51:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.013 11:51:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:26.013 11:51:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:26.013 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:26.013 [2024-07-25 11:51:22.891046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:22:26.013 [2024-07-25 11:51:22.894663] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.013 [2024-07-25 11:51:22.894862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.013 [2024-07-25 11:51:22.895111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.013 [2024-07-25 11:51:22.895299] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.013 [2024-07-25 11:51:22.895524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.013 [2024-07-25 11:51:22.895666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.013 [2024-07-25 11:51:22.895769] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.013 [2024-07-25 11:51:22.895820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.013 [2024-07-25 11:51:22.895954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.013 [2024-07-25 11:51:22.896022] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.013 [2024-07-25 11:51:22.896069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.013 [2024-07-25 11:51:22.896200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:26.014 11:51:22 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:26.014 11:51:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.014 11:51:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:26.014 11:51:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:26.014 11:51:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:26.579 [2024-07-25 11:51:23.391068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:22:26.579 [2024-07-25 11:51:23.394061] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.579 [2024-07-25 11:51:23.394310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.579 [2024-07-25 11:51:23.394525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.579 [2024-07-25 11:51:23.394934] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.579 [2024-07-25 11:51:23.395112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.579 [2024-07-25 11:51:23.395293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.579 [2024-07-25 11:51:23.395570] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.579 [2024-07-25 11:51:23.395802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.579 [2024-07-25 11:51:23.395989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.579 [2024-07-25 11:51:23.396235] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:26.579 [2024-07-25 11:51:23.396424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.579 [2024-07-25 11:51:23.396600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
00:22:26.579 11:51:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.579 11:51:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:26.579 11:51:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:26.579 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:26.837 11:51:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.11 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.11 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.11 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.11 2 00:22:39.047 remove_attach_helper took 45.11s to complete (handling 2 nvme drive(s)) 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:22:39.047 11:51:35 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:39.047 11:51:35 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:45.606 11:51:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:45.606 11:51:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.606 11:51:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:45.606 [2024-07-25 11:51:42.026857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:22:45.606 [2024-07-25 11:51:42.028930] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.028968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.028991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 [2024-07-25 11:51:42.029017] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.029034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.029048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 [2024-07-25 11:51:42.029064] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.029076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.029091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 [2024-07-25 11:51:42.029105] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.029119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 11:51:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:45.606 [2024-07-25 11:51:42.426855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:22:45.606 [2024-07-25 11:51:42.429004] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.429077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.429101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 [2024-07-25 11:51:42.429131] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.429147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 [2024-07-25 11:51:42.429180] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.429196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.429210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 [2024-07-25 11:51:42.429227] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:45.606 [2024-07-25 11:51:42.429241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.606 [2024-07-25 11:51:42.429257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:45.606 11:51:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.606 11:51:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:45.606 11:51:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:45.606 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:45.864 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:46.122 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:46.122 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:46.122 11:51:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:58.348 11:51:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.348 11:51:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:58.348 11:51:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:58.348 11:51:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:58.348 [2024-07-25 11:51:55.027226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:58.348 [2024-07-25 11:51:55.029308] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.348 [2024-07-25 11:51:55.029421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.348 [2024-07-25 11:51:55.029523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.348 [2024-07-25 11:51:55.029795] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.348 [2024-07-25 11:51:55.030003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.348 [2024-07-25 11:51:55.030154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.348 [2024-07-25 11:51:55.030308] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.348 [2024-07-25 11:51:55.030525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.348 [2024-07-25 11:51:55.030711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.348 [2024-07-25 11:51:55.030799] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.348 [2024-07-25 11:51:55.030905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.348 [2024-07-25 11:51:55.031059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:58.348 11:51:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.348 11:51:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:58.348 11:51:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:58.348 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:58.607 [2024-07-25 11:51:55.427190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:22:58.607 [2024-07-25 11:51:55.429009] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.607 [2024-07-25 11:51:55.429095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.607 [2024-07-25 11:51:55.429118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.607 [2024-07-25 11:51:55.429147] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.607 [2024-07-25 11:51:55.429165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.607 [2024-07-25 11:51:55.429199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.607 [2024-07-25 11:51:55.429215] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.607 [2024-07-25 11:51:55.429231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.607 [2024-07-25 11:51:55.429245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.607 [2024-07-25 11:51:55.429261] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:58.607 [2024-07-25 11:51:55.429285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.607 [2024-07-25 11:51:55.429311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.607 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:58.607 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:58.607 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:58.607 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:58.607 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:58.607 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:58.607 11:51:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.607 11:51:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:58.607 11:51:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:58.866 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:22:59.124 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:59.124 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:59.124 11:51:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:11.329 11:52:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.329 11:52:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:11.329 11:52:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:11.329 11:52:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:11.329 [2024-07-25 11:52:08.027395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:11.329 [2024-07-25 11:52:08.029651] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.329 [2024-07-25 11:52:08.029910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.329 [2024-07-25 11:52:08.030153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.329 [2024-07-25 11:52:08.030362] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.329 [2024-07-25 11:52:08.030525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.329 [2024-07-25 11:52:08.030719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.329 [2024-07-25 11:52:08.030943] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.329 [2024-07-25 11:52:08.031120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.329 [2024-07-25 11:52:08.031304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.329 [2024-07-25 11:52:08.031468] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.329 [2024-07-25 11:52:08.031669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.329 [2024-07-25 11:52:08.031848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:11.329 11:52:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.329 11:52:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:11.329 11:52:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:11.329 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:11.588 [2024-07-25 11:52:08.527436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:23:11.588 [2024-07-25 11:52:08.530289] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.588 [2024-07-25 11:52:08.530531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.588 [2024-07-25 11:52:08.530735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.588 [2024-07-25 11:52:08.530919] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.588 [2024-07-25 11:52:08.531188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.588 [2024-07-25 11:52:08.531381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.588 [2024-07-25 11:52:08.531575] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.588 [2024-07-25 11:52:08.531834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.588 [2024-07-25 11:52:08.532000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.588 [2024-07-25 11:52:08.532162] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:11.588 [2024-07-25 11:52:08.532386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.588 [2024-07-25 11:52:08.532604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.588 [2024-07-25 11:52:08.532877] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:23:11.588 [2024-07-25 11:52:08.533050] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:23:11.588 [2024-07-25 11:52:08.533188] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:23:11.588 [2024-07-25 11:52:08.533245] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:23:11.588 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:23:11.588 11:52:08 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:11.588 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:11.588 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:11.588 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:11.588 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:11.588 11:52:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.588 11:52:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:11.588 11:52:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:11.847 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:12.105 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:12.105 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:12.105 11:52:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:24.341 11:52:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:24.341 11:52:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:24.341 11:52:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:24.341 11:52:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:24.341 11:52:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:24.341 11:52:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:24.341 11:52:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.341 11:52:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:24.341 11:52:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.341 11:52:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:24.341 11:52:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.07 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.07 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:23:24.341 11:52:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.07 00:23:24.341 11:52:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.07 2 00:23:24.341 remove_attach_helper took 45.07s to complete (handling 2 nvme drive(s)) 11:52:21 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:23:24.341 11:52:21 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 72687 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 72687 ']' 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 72687
00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72687 00:23:24.341 killing process with pid 72687 11:52:21 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72687' 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@969 -- # kill 72687 00:23:24.341 11:52:21 sw_hotplug -- common/autotest_common.sh@974 -- # wait 72687 00:23:26.251 11:52:23 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:26.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:27.076 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:27.076 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:27.076 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:27.076 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:27.076 00:23:27.076 real 2m31.305s 00:23:27.076 user 1m51.352s 00:23:27.076 sys 0m19.723s 00:23:27.076 11:52:24 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:27.076 11:52:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:27.076 ************************************ 00:23:27.076 END TEST sw_hotplug 00:23:27.076 ************************************ 00:23:27.335 11:52:24 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:23:27.335 11:52:24 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:27.335 11:52:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:27.335 11:52:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.335 11:52:24 -- common/autotest_common.sh@10 -- # set +x 00:23:27.335 ************************************ 00:23:27.335 START TEST nvme_xnvme 00:23:27.335 ************************************ 00:23:27.335 11:52:24 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:27.335 * Looking for test storage...
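A note for readers following the trace: the bdev_bdfs helper exercised above (nvme/sw_hotplug.sh@12-13) is only a short pipeline over the bdev_get_bdevs RPC output. A standalone sketch, with scripts/rpc.py assumed as a stand-in for the framework's rpc_cmd wrapper:

    # Sketch of the bdev_bdfs helper traced above. Assumption: rpc.py is
    # called directly; the test itself goes through its rpc_cmd wrapper.
    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    # sw_hotplug.sh@71 then compares this list against the BDFs it
    # detached and re-attached, here "0000:00:10.0 0000:00:11.0".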
00:23:27.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:27.335 11:52:24 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.335 11:52:24 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.335 11:52:24 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.335 11:52:24 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.335 11:52:24 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.335 11:52:24 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.335 11:52:24 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.335 11:52:24 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:23:27.335 11:52:24 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.335 11:52:24 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:23:27.335 11:52:24 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:27.335 11:52:24 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.335 11:52:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:27.335 ************************************ 00:23:27.335 START TEST xnvme_to_malloc_dd_copy 00:23:27.335 ************************************ 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
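The xnvme_to_malloc_dd_copy setup, traced here and just below (xnvme/xnvme.sh@14-39), boils down to one RAM-backed null block device plus two bdev descriptions kept in bash associative arrays. A condensed sketch with the values from the trace:

    # Condensed from the xtrace; not the literal script text.
    modprobe null_blk gb=1               # provides /dev/nullb0 (1 GiB)

    # 2097152 blocks * 512 B = 1 GiB malloc bdev
    declare -A method_bdev_malloc_create_0=(
        [name]=malloc0 [num_blocks]=2097152 [block_size]=512
    )
    # xnvme bdev over the null device; the loop at @38 reruns the whole
    # copy once per io_mechanism, libaio first and io_uring second
    declare -A method_bdev_xnvme_create_0=(
        [name]=null0 [filename]=/dev/nullb0 [io_mechanism]=libaio
    )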
00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:27.335 11:52:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:23:27.335 { 00:23:27.335 "subsystems": [ 00:23:27.335 { 00:23:27.335 "subsystem": "bdev", 00:23:27.335 "config": [ 00:23:27.335 { 00:23:27.335 "params": { 00:23:27.335 "block_size": 512, 00:23:27.335 "num_blocks": 2097152, 00:23:27.335 "name": "malloc0" 00:23:27.335 }, 00:23:27.335 "method": "bdev_malloc_create" 00:23:27.335 }, 00:23:27.335 { 00:23:27.335 "params": { 00:23:27.335 "io_mechanism": "libaio", 00:23:27.335 "filename": "/dev/nullb0", 00:23:27.335 "name": "null0" 00:23:27.335 }, 00:23:27.335 "method": "bdev_xnvme_create" 00:23:27.335 }, 00:23:27.335 { 00:23:27.335 "method": "bdev_wait_for_examine" 00:23:27.335 } 00:23:27.335 ] 00:23:27.335 } 00:23:27.335 ] 00:23:27.335 } 00:23:27.335 [2024-07-25 11:52:24.357220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:27.335 [2024-07-25 11:52:24.358219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74036 ] 00:23:27.593 [2024-07-25 11:52:24.533637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.850 [2024-07-25 11:52:24.757119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.103  Copying: 172/1024 [MB] (172 MBps) Copying: 352/1024 [MB] (180 MBps) Copying: 543/1024 [MB] (190 MBps) Copying: 730/1024 [MB] (187 MBps) Copying: 916/1024 [MB] (186 MBps) Copying: 1024/1024 [MB] (average 184 MBps) 00:23:38.103 00:23:38.103 11:52:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:23:38.103 11:52:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:23:38.103 11:52:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:38.103 11:52:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:38.103 { 00:23:38.103 "subsystems": [ 00:23:38.103 { 00:23:38.103 "subsystem": "bdev", 00:23:38.103 "config": [ 00:23:38.103 { 00:23:38.103 "params": { 00:23:38.103 "block_size": 512, 00:23:38.103 "num_blocks": 2097152, 00:23:38.103 "name": "malloc0" 00:23:38.103 }, 00:23:38.103 "method": "bdev_malloc_create" 00:23:38.103 }, 00:23:38.103 { 00:23:38.103 "params": { 00:23:38.103 "io_mechanism": "libaio", 00:23:38.103 "filename": "/dev/nullb0", 00:23:38.103 "name": "null0" 00:23:38.103 }, 00:23:38.103 "method": "bdev_xnvme_create" 00:23:38.103 }, 00:23:38.103 { 00:23:38.103 "method": "bdev_wait_for_examine" 00:23:38.103 } 00:23:38.103 ] 00:23:38.103 } 00:23:38.103 ] 00:23:38.103 } 00:23:38.103 [2024-07-25 11:52:34.818965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
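Both spdk_dd passes take their bdev configuration on an inherited file descriptor (--json /dev/fd/62) rather than from a file on disk; gen_conf emits the JSON subsystem block quoted in the log. A roughly equivalent invocation, assuming gen_conf is in scope:

    # Forward pass (xnvme/xnvme.sh@42): copy malloc0 into null0.
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)
    # Reverse pass (xnvme/xnvme.sh@47): read it back, null0 into malloc0.
    ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)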
00:23:38.103 [2024-07-25 11:52:34.819151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74152 ] 00:23:38.103 [2024-07-25 11:52:34.984272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.385 [2024-07-25 11:52:35.159051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.031  Copying: 195/1024 [MB] (195 MBps) Copying: 388/1024 [MB] (193 MBps) Copying: 583/1024 [MB] (194 MBps) Copying: 777/1024 [MB] (194 MBps) Copying: 962/1024 [MB] (184 MBps) Copying: 1024/1024 [MB] (average 192 MBps) 00:23:48.031 00:23:48.031 11:52:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:23:48.031 11:52:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:23:48.031 11:52:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:23:48.031 11:52:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:23:48.031 11:52:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:48.031 11:52:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:48.031 { 00:23:48.031 "subsystems": [ 00:23:48.031 { 00:23:48.031 "subsystem": "bdev", 00:23:48.031 "config": [ 00:23:48.031 { 00:23:48.031 "params": { 00:23:48.031 "block_size": 512, 00:23:48.031 "num_blocks": 2097152, 00:23:48.031 "name": "malloc0" 00:23:48.031 }, 00:23:48.031 "method": "bdev_malloc_create" 00:23:48.031 }, 00:23:48.031 { 00:23:48.031 "params": { 00:23:48.031 "io_mechanism": "io_uring", 00:23:48.031 "filename": "/dev/nullb0", 00:23:48.031 "name": "null0" 00:23:48.031 }, 00:23:48.031 "method": "bdev_xnvme_create" 00:23:48.031 }, 00:23:48.031 { 00:23:48.031 "method": "bdev_wait_for_examine" 00:23:48.031 } 00:23:48.031 ] 00:23:48.031 } 00:23:48.031 ] 00:23:48.031 } 00:23:48.031 [2024-07-25 11:52:45.046865] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:48.031 [2024-07-25 11:52:45.047080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74272 ] 00:23:48.290 [2024-07-25 11:52:45.223595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.548 [2024-07-25 11:52:45.412557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.794  Copying: 186/1024 [MB] (186 MBps) Copying: 371/1024 [MB] (184 MBps) Copying: 549/1024 [MB] (178 MBps) Copying: 731/1024 [MB] (182 MBps) Copying: 914/1024 [MB] (182 MBps) Copying: 1024/1024 [MB] (average 182 MBps) 00:23:58.794 00:23:58.794 11:52:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:23:58.794 11:52:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:23:58.794 11:52:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:58.794 11:52:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:58.794 { 00:23:58.794 "subsystems": [ 00:23:58.794 { 00:23:58.794 "subsystem": "bdev", 00:23:58.794 "config": [ 00:23:58.794 { 00:23:58.794 "params": { 00:23:58.794 "block_size": 512, 00:23:58.794 "num_blocks": 2097152, 00:23:58.794 "name": "malloc0" 00:23:58.794 }, 00:23:58.794 "method": "bdev_malloc_create" 00:23:58.794 }, 00:23:58.794 { 00:23:58.794 "params": { 00:23:58.794 "io_mechanism": "io_uring", 00:23:58.794 "filename": "/dev/nullb0", 00:23:58.794 "name": "null0" 00:23:58.794 }, 00:23:58.794 "method": "bdev_xnvme_create" 00:23:58.794 }, 00:23:58.794 { 00:23:58.794 "method": "bdev_wait_for_examine" 00:23:58.794 } 00:23:58.794 ] 00:23:58.794 } 00:23:58.794 ] 00:23:58.794 } 00:23:58.794 [2024-07-25 11:52:55.770070] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:58.794 [2024-07-25 11:52:55.770220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74391 ] 00:23:59.053 [2024-07-25 11:52:55.937505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.313 [2024-07-25 11:52:56.169219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.375  Copying: 181/1024 [MB] (181 MBps) Copying: 365/1024 [MB] (184 MBps) Copying: 547/1024 [MB] (182 MBps) Copying: 728/1024 [MB] (180 MBps) Copying: 912/1024 [MB] (184 MBps) Copying: 1024/1024 [MB] (average 182 MBps) 00:24:10.375 00:24:10.375 11:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:24:10.375 11:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:24:10.375 ************************************ 00:24:10.375 END TEST xnvme_to_malloc_dd_copy 00:24:10.375 ************************************ 00:24:10.375 00:24:10.375 real 0m42.354s 00:24:10.375 user 0m37.175s 00:24:10.375 sys 0m4.583s 00:24:10.375 11:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:10.375 11:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:24:10.375 11:53:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:10.375 11:53:06 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:10.375 11:53:06 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:10.375 11:53:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:10.375 ************************************ 00:24:10.375 START TEST xnvme_bdevperf 00:24:10.375 ************************************ 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:10.375 11:53:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.375 { 00:24:10.375 "subsystems": [ 00:24:10.375 { 00:24:10.375 "subsystem": "bdev", 00:24:10.375 "config": [ 00:24:10.375 { 00:24:10.375 "params": { 00:24:10.375 "io_mechanism": "libaio", 00:24:10.375 "filename": "/dev/nullb0", 00:24:10.375 "name": "null0" 00:24:10.375 }, 00:24:10.375 "method": "bdev_xnvme_create" 00:24:10.375 }, 00:24:10.375 { 00:24:10.375 "method": "bdev_wait_for_examine" 00:24:10.375 } 00:24:10.375 ] 00:24:10.375 } 00:24:10.375 ] 00:24:10.375 } 00:24:10.375 [2024-07-25 11:53:06.757123] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:10.375 [2024-07-25 11:53:06.757288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74535 ] 00:24:10.375 [2024-07-25 11:53:06.923609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.375 [2024-07-25 11:53:07.147939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.633 Running I/O for 5 seconds... 00:24:15.897 00:24:15.897 Latency(us) 00:24:15.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.897 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:15.897 null0 : 5.00 112365.50 438.93 0.00 0.00 566.06 165.70 923.46 00:24:15.897 =================================================================================================================== 00:24:15.897 Total : 112365.50 438.93 0.00 0.00 566.06 165.70 923.46 00:24:16.860 11:53:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:24:16.860 11:53:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:24:16.860 11:53:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:24:16.860 11:53:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:24:16.860 11:53:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:16.860 11:53:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:16.860 { 00:24:16.860 "subsystems": [ 00:24:16.860 { 00:24:16.860 "subsystem": "bdev", 00:24:16.860 "config": [ 00:24:16.860 { 00:24:16.860 "params": { 00:24:16.860 "io_mechanism": "io_uring", 00:24:16.860 "filename": "/dev/nullb0", 00:24:16.860 "name": "null0" 00:24:16.860 }, 00:24:16.860 "method": "bdev_xnvme_create" 00:24:16.860 }, 00:24:16.860 { 00:24:16.860 "method": "bdev_wait_for_examine" 00:24:16.860 } 00:24:16.860 ] 00:24:16.860 } 00:24:16.860 ] 00:24:16.860 } 00:24:16.860 [2024-07-25 11:53:13.703133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
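The bdevperf runs traced in this stretch (libaio above, io_uring starting here) all use the same flags, and the Job line in the results echoes them back. Spelled out as a sketch, again assuming gen_conf is in scope:

    # xnvme/xnvme.sh@74 as traced: 5-second random-read run at queue
    # depth 64 with 4 KiB I/Os, pointed at the null0 bdev.
    ./build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w randread -t 5 -T null0 -o 4096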
00:24:16.861 [2024-07-25 11:53:13.703297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74615 ] 00:24:16.861 [2024-07-25 11:53:13.874798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.119 [2024-07-25 11:53:14.059705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.377 Running I/O for 5 seconds... 00:24:22.642 00:24:22.642 Latency(us) 00:24:22.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.642 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:22.642 null0 : 5.00 151042.40 590.01 0.00 0.00 420.37 245.76 618.12 00:24:22.642 =================================================================================================================== 00:24:22.642 Total : 151042.40 590.01 0.00 0.00 420.37 245.76 618.12 00:24:23.576 11:53:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:24:23.576 11:53:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:24:23.576 ************************************ 00:24:23.576 END TEST xnvme_bdevperf 00:24:23.576 ************************************ 00:24:23.576 00:24:23.576 real 0m13.887s 00:24:23.576 user 0m10.802s 00:24:23.576 sys 0m2.848s 00:24:23.576 11:53:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:23.576 11:53:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:23.576 ************************************ 00:24:23.576 END TEST nvme_xnvme 00:24:23.576 ************************************ 00:24:23.576 00:24:23.576 real 0m56.429s 00:24:23.576 user 0m48.047s 00:24:23.576 sys 0m7.542s 00:24:23.576 11:53:20 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:23.576 11:53:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:23.576 11:53:20 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:23.576 11:53:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:23.576 11:53:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.576 11:53:20 -- common/autotest_common.sh@10 -- # set +x 00:24:23.835 ************************************ 00:24:23.835 START TEST blockdev_xnvme 00:24:23.835 ************************************ 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:23.835 * Looking for test storage... 
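Every suite in this log is driven through the run_test wrapper from autotest_common.sh, which produces the START TEST/END TEST banners and the real/user/sys totals seen above. A rough sketch of the pattern (not the exact helper, which also manages the per-test xtrace prefixes):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # e.g. blockdev.sh xnvme, as invoked above
        echo "END TEST $name"
    }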
00:24:23.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74755 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74755 00:24:23.835 11:53:20 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:23.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 74755 ']' 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.835 11:53:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:23.835 [2024-07-25 11:53:20.805847] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
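start_spdk_tgt, traced just above, is the usual launch-and-wait sequence: start the target, record its pid (74755 here), arm the cleanup trap, and block until the RPC socket answers. A sketch under those assumptions (the backgrounding detail is inferred, not shown in the trace):

    # Sketch of blockdev.sh@46-49; waitforlisten and killprocess are the
    # autotest_common.sh helpers named in the log.
    ./build/bin/spdk_tgt '' '' &        # both optional arg slots empty here
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"       # waits on /var/tmp/spdk.sock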
00:24:23.835 [2024-07-25 11:53:20.806008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74755 ] 00:24:24.093 [2024-07-25 11:53:20.968544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.351 [2024-07-25 11:53:21.208647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.917 11:53:21 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.917 11:53:21 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:24:24.917 11:53:21 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:24:24.917 11:53:21 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:24:24.917 11:53:21 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:24:24.917 11:53:21 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:24:24.917 11:53:21 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:25.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:25.432 Waiting for block devices as requested 00:24:25.432 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:25.690 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:25.690 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:25.690 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:30.948 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:30.948 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:30.948 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:24:30.949 11:53:27 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:30.949 11:53:27 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.949 nvme0n1 00:24:30.949 nvme1n1 00:24:30.949 nvme2n1 00:24:30.949 nvme2n2 00:24:30.949 nvme2n3 00:24:30.949 nvme3n1 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:30.949 11:53:27 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:24:30.949 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:24:30.950 11:53:27 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ab3d696f-8217-4a0c-8ec5-7352e2fbdca6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ab3d696f-8217-4a0c-8ec5-7352e2fbdca6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a56477bc-a82d-40fa-a895-c8f9806dd013"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a56477bc-a82d-40fa-a895-c8f9806dd013",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b34bba2e-663f-46bc-85ec-3c5071cbb5c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b34bba2e-663f-46bc-85ec-3c5071cbb5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6616b5a9-aab5-4ee7-a24f-f14408d2169c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6616b5a9-aab5-4ee7-a24f-f14408d2169c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "64523532-f90f-42d2-82e0-82dced4e4f73"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "64523532-f90f-42d2-82e0-82dced4e4f73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6bda4b84-b6ca-4340-bf55-12892bbe6c6b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6bda4b84-b6ca-4340-bf55-12892bbe6c6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:24:31.207 11:53:28 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:24:31.207 11:53:28 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:24:31.207 11:53:28 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:24:31.207 11:53:28 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74755 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 74755 ']' 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 74755 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74755 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:31.207 killing process with pid 74755 00:24:31.207 11:53:28 blockdev_xnvme -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74755' 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 74755 00:24:31.207 11:53:28 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 74755 00:24:33.106 11:53:30 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:33.106 11:53:30 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:33.106 11:53:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:33.106 11:53:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:33.106 11:53:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:33.106 ************************************ 00:24:33.106 START TEST bdev_hello_world 00:24:33.106 ************************************ 00:24:33.106 11:53:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:33.363 [2024-07-25 11:53:30.195110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:33.363 [2024-07-25 11:53:30.195267] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75118 ] 00:24:33.363 [2024-07-25 11:53:30.353657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.621 [2024-07-25 11:53:30.537971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.186 [2024-07-25 11:53:30.921940] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:34.186 [2024-07-25 11:53:30.922000] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:24:34.186 [2024-07-25 11:53:30.922035] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:34.186 [2024-07-25 11:53:30.924320] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:34.186 [2024-07-25 11:53:30.924590] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:34.186 [2024-07-25 11:53:30.924633] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:34.186 [2024-07-25 11:53:30.924843] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
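The bdev_hello_world pass that just completed is a one-shot smoke test: open one xnvme bdev through the generated config, write a string, read it back. As invoked in the trace:

    # hello_bdev against the nvme0n1 xnvme bdev; bdev.json is the config
    # assembled from the bdev_xnvme_create lines printed earlier.
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
    # expected tail of the output:
    #   Read string from bdev : Hello World!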
00:24:34.186 00:24:34.186 [2024-07-25 11:53:30.924891] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:35.118 00:24:35.118 real 0m1.929s 00:24:35.118 user 0m1.627s 00:24:35.118 sys 0m0.188s 00:24:35.118 11:53:32 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.118 ************************************ 00:24:35.118 11:53:32 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:35.118 END TEST bdev_hello_world 00:24:35.118 ************************************ 00:24:35.118 11:53:32 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:24:35.118 11:53:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:35.118 11:53:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.118 11:53:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:35.118 ************************************ 00:24:35.118 START TEST bdev_bounds 00:24:35.118 ************************************ 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75160 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:35.118 Process bdevio pid: 75160 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75160' 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75160 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75160 ']' 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.118 11:53:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:35.375 [2024-07-25 11:53:32.177585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:35.375 [2024-07-25 11:53:32.177767] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75160 ] 00:24:35.375 [2024-07-25 11:53:32.349553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:35.632 [2024-07-25 11:53:32.592960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.632 [2024-07-25 11:53:32.593081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.632 [2024-07-25 11:53:32.593086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.197 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.197 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:24:36.197 11:53:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:36.197 I/O targets: 00:24:36.197 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:24:36.197 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:24:36.197 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:36.197 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:36.197 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:36.197 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:24:36.197 00:24:36.197 00:24:36.197 CUnit - A unit testing framework for C - Version 2.1-3 00:24:36.197 http://cunit.sourceforge.net/ 00:24:36.197 00:24:36.197 00:24:36.197 Suite: bdevio tests on: nvme3n1 00:24:36.197 Test: blockdev write read block ...passed 00:24:36.197 Test: blockdev write zeroes read block ...passed 00:24:36.197 Test: blockdev write zeroes read no split ...passed 00:24:36.197 Test: blockdev write zeroes read split ...passed 00:24:36.455 Test: blockdev write zeroes read split partial ...passed 00:24:36.455 Test: blockdev reset ...passed 00:24:36.455 Test: blockdev write read 8 blocks ...passed 00:24:36.455 Test: blockdev write read size > 128k ...passed 00:24:36.455 Test: blockdev write read invalid size ...passed 00:24:36.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:36.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:36.455 Test: blockdev write read max offset ...passed 00:24:36.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:36.455 Test: blockdev writev readv 8 blocks ...passed 00:24:36.455 Test: blockdev writev readv 30 x 1block ...passed 00:24:36.455 Test: blockdev writev readv block ...passed 00:24:36.455 Test: blockdev writev readv size > 128k ...passed 00:24:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:36.455 Test: blockdev comparev and writev ...passed 00:24:36.455 Test: blockdev nvme passthru rw ...passed 00:24:36.455 Test: blockdev nvme passthru vendor specific ...passed 00:24:36.455 Test: blockdev nvme admin passthru ...passed 00:24:36.455 Test: blockdev copy ...passed 00:24:36.455 Suite: bdevio tests on: nvme2n3 00:24:36.455 Test: blockdev write read block ...passed 00:24:36.455 Test: blockdev write zeroes read block ...passed 00:24:36.455 Test: blockdev write zeroes read no split ...passed 00:24:36.455 Test: blockdev write zeroes read split ...passed 00:24:36.455 Test: blockdev write zeroes read split partial ...passed 00:24:36.455 Test: blockdev reset ...passed 
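The bdevio output streaming here comes from a two-process arrangement, per the commands in the trace: bdevio is started with -w so it sits waiting, and tests.py triggers the suites over RPC. A sketch of that pairing (the -w semantics are inferred from the separate trigger step):

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    ./test/bdev/bdevio/tests.py perform_tests   # fires the 138 CUnit tests
    wait "$bdevio_pid"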
00:24:36.455 Test: blockdev write read 8 blocks ...passed 00:24:36.455 Test: blockdev write read size > 128k ...passed 00:24:36.455 Test: blockdev write read invalid size ...passed 00:24:36.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:36.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:36.455 Test: blockdev write read max offset ...passed 00:24:36.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:36.455 Test: blockdev writev readv 8 blocks ...passed 00:24:36.455 Test: blockdev writev readv 30 x 1block ...passed 00:24:36.455 Test: blockdev writev readv block ...passed 00:24:36.455 Test: blockdev writev readv size > 128k ...passed 00:24:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:36.455 Test: blockdev comparev and writev ...passed 00:24:36.455 Test: blockdev nvme passthru rw ...passed 00:24:36.455 Test: blockdev nvme passthru vendor specific ...passed 00:24:36.455 Test: blockdev nvme admin passthru ...passed 00:24:36.455 Test: blockdev copy ...passed 00:24:36.455 Suite: bdevio tests on: nvme2n1 00:24:36.455 Test: blockdev write read block ...passed 00:24:36.455 Test: blockdev write zeroes read block ...passed 00:24:36.455 Test: blockdev write zeroes read no split ...passed 00:24:36.455 Test: blockdev write zeroes read split ...passed 00:24:36.455 Test: blockdev write zeroes read split partial ...passed 00:24:36.455 Test: blockdev reset ...passed 00:24:36.455 Test: blockdev write read 8 blocks ...passed 00:24:36.455 Test: blockdev write read size > 128k ...passed 00:24:36.455 Test: blockdev write read invalid size ...passed 00:24:36.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:36.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:36.455 Test: blockdev write read max offset ...passed 00:24:36.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:36.455 Test: blockdev writev readv 8 blocks ...passed
00:24:36.455 Test: blockdev writev readv 30 x 1block ...passed 00:24:36.455 Test: blockdev writev readv block ...passed 00:24:36.455 Test: blockdev writev readv size > 128k ...passed 00:24:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:36.455 Test: blockdev comparev and writev ...passed 00:24:36.456 Test: blockdev nvme passthru rw ...passed 00:24:36.456 Test: blockdev nvme passthru vendor specific ...passed 00:24:36.456 Test: blockdev nvme admin passthru ...passed 00:24:36.456 Test: blockdev copy ...passed 00:24:36.456 Suite: bdevio tests on: nvme1n1 00:24:36.456 Test: blockdev write read block ...passed 00:24:36.456 Test: blockdev write zeroes read block ...passed 00:24:36.456 Test: blockdev write zeroes read no split ...passed 00:24:36.456 Test: blockdev write zeroes read split ...passed 00:24:36.714 Test: blockdev write zeroes read split partial ...passed 00:24:36.714 Test: blockdev reset ...passed 00:24:36.714 Test: blockdev write read 8 blocks ...passed 00:24:36.714 Test: blockdev write read size > 128k ...passed 00:24:36.714 Test: blockdev write read invalid size ...passed 00:24:36.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:36.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:36.714 Test: blockdev write read max offset ...passed 00:24:36.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:36.714 Test: blockdev writev readv 8 blocks ...passed 00:24:36.714 Test: blockdev writev readv 30 x 1block ...passed 00:24:36.714 Test: blockdev writev readv block ...passed 00:24:36.714 Test: blockdev writev readv size > 128k ...passed 00:24:36.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:36.714 Test: blockdev comparev and writev ...passed 00:24:36.714 Test: blockdev nvme passthru rw ...passed 00:24:36.714 Test: blockdev nvme passthru vendor specific ...passed 00:24:36.714 Test: blockdev nvme admin passthru ...passed 00:24:36.714 Test: blockdev copy ...passed 00:24:36.714 Suite: bdevio tests on: nvme0n1 00:24:36.714 Test: blockdev write read block ...passed 00:24:36.714 Test: blockdev write zeroes read block ...passed 00:24:36.714 Test: blockdev write zeroes read no split ...passed 00:24:36.714 Test: blockdev write zeroes read split ...passed 00:24:36.714 Test: blockdev write zeroes read split partial ...passed 00:24:36.714 Test: blockdev reset ...passed 00:24:36.714 Test: blockdev write read 8 blocks ...passed 00:24:36.714 Test: blockdev write read size > 128k ...passed 00:24:36.714 Test: blockdev write read invalid size ...passed 00:24:36.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:36.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:36.714 Test: blockdev write read max offset ...passed 00:24:36.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:36.714 Test: blockdev writev readv 8 blocks ...passed 00:24:36.714 Test: blockdev writev readv 30 x 1block ...passed 00:24:36.714 Test: blockdev writev readv block ...passed 00:24:36.714 Test: blockdev writev readv size > 128k ...passed 00:24:36.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:36.714 Test: blockdev comparev and writev ...passed 00:24:36.714 Test: blockdev nvme passthru rw ...passed 00:24:36.714 Test: blockdev nvme passthru vendor specific ...passed 00:24:36.714 Test: blockdev nvme admin passthru ...passed 00:24:36.714 Test: blockdev copy ...passed
00:24:36.714
00:24:36.714 Run Summary: Type Total Ran Passed Failed Inactive
00:24:36.714 suites 6 6 n/a 0 0
00:24:36.714 tests 138 138 138 0 0
00:24:36.714 asserts 780 780 780 0 n/a
00:24:36.714
00:24:36.714 Elapsed time = 1.094 seconds
00:24:36.714 0
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75160
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75160 ']'
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75160
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75160
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:36.714 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:36.714 killing process with pid 75160 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75160' 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75160 11:53:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75160
00:24:38.087 11:53:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:24:38.087
00:24:38.087 real 0m2.677s
00:24:38.087 user 0m6.204s
00:24:38.087 sys 0m0.332s
00:24:38.087 11:53:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:38.087 11:53:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:24:38.087 ************************************
00:24:38.087 END TEST bdev_bounds
00:24:38.087 ************************************
00:24:38.087 11:53:34 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:24:38.087 11:53:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:24:38.087 11:53:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:38.087 11:53:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:24:38.087 ************************************
00:24:38.087 START TEST bdev_nbd
00:24:38.087 ************************************
00:24:38.087 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:24:38.087 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:24:38.087 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:24:38.087 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:24:38.087 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75215 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75215 /var/tmp/spdk-nbd.sock 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75215 ']' 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:38.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.088 11:53:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:38.088 [2024-07-25 11:53:34.913382] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:38.088 [2024-07-25 11:53:34.913564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.088 [2024-07-25 11:53:35.099236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.345 [2024-07-25 11:53:35.284246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:38.910 11:53:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:39.168 
1+0 records in 00:24:39.168 1+0 records out 00:24:39.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326044 s, 12.6 MB/s 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:39.168 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:39.425 1+0 records in 00:24:39.425 1+0 records out 00:24:39.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401825 s, 10.2 MB/s 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.425 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:39.426 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.426 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:39.426 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:39.426 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:39.426 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:39.426 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:24:39.683 11:53:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:39.683 1+0 records in 00:24:39.683 1+0 records out 00:24:39.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478084 s, 8.6 MB/s 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:39.683 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:40.038 1+0 records in 00:24:40.038 1+0 records out 00:24:40.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445055 s, 9.2 MB/s 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:40.038 11:53:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:40.316 1+0 records in 00:24:40.316 1+0 records out 00:24:40.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455141 s, 9.0 MB/s 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:40.316 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:24:40.881 11:53:37 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:24:40.881 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:40.882 1+0 records in 00:24:40.882 1+0 records out 00:24:40.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663908 s, 6.2 MB/s 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd0", 00:24:40.882 "bdev_name": "nvme0n1" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd1", 00:24:40.882 "bdev_name": "nvme1n1" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd2", 00:24:40.882 "bdev_name": "nvme2n1" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd3", 00:24:40.882 "bdev_name": "nvme2n2" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd4", 00:24:40.882 "bdev_name": "nvme2n3" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd5", 00:24:40.882 "bdev_name": "nvme3n1" 00:24:40.882 } 00:24:40.882 ]' 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:40.882 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd0", 00:24:40.882 "bdev_name": "nvme0n1" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd1", 00:24:40.882 "bdev_name": "nvme1n1" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd2", 00:24:40.882 "bdev_name": "nvme2n1" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd3", 00:24:40.882 "bdev_name": "nvme2n2" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd4", 00:24:40.882 "bdev_name": "nvme2n3" 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "nbd_device": "/dev/nbd5", 00:24:40.882 "bdev_name": "nvme3n1" 00:24:40.882 } 00:24:40.882 ]' 00:24:40.882 11:53:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:41.140 11:53:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:41.397 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:41.655 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:41.912 11:53:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.170 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.428 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:42.686 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:42.944 11:53:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:24:43.510 /dev/nbd0 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:43.510 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:43.511 1+0 records in 00:24:43.511 1+0 records out 00:24:43.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507175 s, 8.1 MB/s 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:43.511 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:24:43.769 /dev/nbd1 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:43.769 1+0 records in 00:24:43.769 1+0 records out 00:24:43.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585357 s, 7.0 MB/s 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:43.769 11:53:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:43.769 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:24:44.027 /dev/nbd10 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:44.027 1+0 records in 00:24:44.027 1+0 records out 00:24:44.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519089 s, 7.9 MB/s 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:44.027 11:53:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:24:44.285 /dev/nbd11 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:44.285 11:53:41 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:44.285 1+0 records in 00:24:44.285 1+0 records out 00:24:44.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700967 s, 5.8 MB/s 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:44.285 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:24:44.543 /dev/nbd12 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:44.543 1+0 records in 00:24:44.543 1+0 records out 00:24:44.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663768 s, 6.2 MB/s 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:44.543 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:24:44.801 /dev/nbd13 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:44.801 1+0 records in 00:24:44.801 1+0 records out 00:24:44.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765629 s, 5.3 MB/s 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:44.801 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:45.059 11:53:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd0", 00:24:45.059 "bdev_name": "nvme0n1" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd1", 00:24:45.059 "bdev_name": "nvme1n1" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd10", 00:24:45.059 "bdev_name": "nvme2n1" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd11", 00:24:45.059 "bdev_name": "nvme2n2" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd12", 00:24:45.059 "bdev_name": "nvme2n3" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd13", 00:24:45.059 "bdev_name": "nvme3n1" 00:24:45.059 } 00:24:45.059 ]' 00:24:45.059 11:53:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd0", 00:24:45.059 "bdev_name": "nvme0n1" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd1", 00:24:45.059 "bdev_name": "nvme1n1" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd10", 00:24:45.059 "bdev_name": "nvme2n1" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd11", 00:24:45.059 "bdev_name": "nvme2n2" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd12", 00:24:45.059 "bdev_name": "nvme2n3" 00:24:45.059 }, 00:24:45.059 { 00:24:45.059 "nbd_device": "/dev/nbd13", 00:24:45.059 "bdev_name": "nvme3n1" 00:24:45.060 } 00:24:45.060 ]' 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:45.060 /dev/nbd1 00:24:45.060 /dev/nbd10 00:24:45.060 /dev/nbd11 00:24:45.060 /dev/nbd12 00:24:45.060 /dev/nbd13' 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:45.060 /dev/nbd1 00:24:45.060 /dev/nbd10 00:24:45.060 /dev/nbd11 00:24:45.060 /dev/nbd12 00:24:45.060 /dev/nbd13' 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:45.060 256+0 records in 00:24:45.060 256+0 records out 00:24:45.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663778 s, 158 MB/s 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:45.060 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:45.318 256+0 records in 00:24:45.318 256+0 records out 00:24:45.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112766 s, 9.3 MB/s 00:24:45.318 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:45.318 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:45.318 256+0 records in 00:24:45.318 256+0 records out 00:24:45.318 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.122957 s, 8.5 MB/s 00:24:45.318 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:45.318 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:24:45.575 256+0 records in 00:24:45.575 256+0 records out 00:24:45.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117294 s, 8.9 MB/s 00:24:45.575 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:45.575 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:24:45.575 256+0 records in 00:24:45.575 256+0 records out 00:24:45.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115763 s, 9.1 MB/s 00:24:45.575 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:45.575 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:24:45.832 256+0 records in 00:24:45.832 256+0 records out 00:24:45.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11151 s, 9.4 MB/s 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:24:45.832 256+0 records in 00:24:45.832 256+0 records out 00:24:45.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127516 s, 8.2 MB/s 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.832 11:53:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.090 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.347 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:24:46.604 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:46.862 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.120 11:53:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.378 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:47.636 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:47.893 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:47.893 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:47.893 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:48.150 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:24:48.151 11:53:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:48.408 malloc_lvol_verify 00:24:48.408 11:53:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:48.666 67b1654b-3242-4314-823a-0d88e25d8f85 00:24:48.666 11:53:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:48.924 ebe52293-7012-4438-a97e-48492d2d683e 00:24:48.924 11:53:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:49.491 /dev/nbd0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:24:49.491 mke2fs 1.46.5 (30-Dec-2021) 00:24:49.491 Discarding device blocks: 0/4096 done 00:24:49.491 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:24:49.491 00:24:49.491 Allocating group tables: 0/1 done 00:24:49.491 Writing inode tables: 0/1 done 00:24:49.491 Creating journal (1024 blocks): done 00:24:49.491 Writing superblocks and filesystem accounting information: 0/1 done 00:24:49.491 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.491 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75215 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75215 ']' 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75215 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75215 00:24:49.750 killing process with pid 75215 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75215' 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75215 00:24:49.750 11:53:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75215 00:24:51.127 ************************************ 00:24:51.127 END TEST bdev_nbd 00:24:51.127 ************************************ 00:24:51.127 11:53:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:51.127 00:24:51.127 real 0m12.950s 00:24:51.127 user 0m18.553s 00:24:51.127 sys 
0m4.163s 00:24:51.127 11:53:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.127 11:53:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:51.127 11:53:47 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:24:51.127 11:53:47 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:24:51.127 11:53:47 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:24:51.127 11:53:47 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:24:51.127 11:53:47 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:51.127 11:53:47 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.127 11:53:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:51.127 ************************************ 00:24:51.127 START TEST bdev_fio 00:24:51.127 ************************************ 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:51.127 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # 
[[ fio-3.35 == *\f\i\o\-\3* ]] 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:51.127 ************************************ 00:24:51.127 START TEST bdev_fio_rw_verify 00:24:51.127 ************************************ 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:51.127 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:51.128 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:51.128 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:24:51.128 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:51.128 11:53:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:51.128 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:51.128 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:51.128 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:51.128 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:51.128 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:51.128 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:51.128 fio-3.35 00:24:51.128 Starting 6 threads 00:25:03.333 00:25:03.333 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75648: Thu Jul 25 11:53:58 2024 00:25:03.333 read: IOPS=28.1k, BW=110MiB/s (115MB/s)(1097MiB/10001msec) 00:25:03.333 slat (usec): 
min=3, max=1517, avg= 7.18, stdev= 4.89 00:25:03.333 clat (usec): min=121, max=5666, avg=663.99, stdev=241.19 00:25:03.333 lat (usec): min=128, max=5679, avg=671.16, stdev=241.81 00:25:03.333 clat percentiles (usec): 00:25:03.333 | 50.000th=[ 685], 99.000th=[ 1237], 99.900th=[ 1844], 99.990th=[ 5473], 00:25:03.333 | 99.999th=[ 5669] 00:25:03.333 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(1108MiB/10001msec); 0 zone resets 00:25:03.333 slat (usec): min=14, max=1820, avg=28.04, stdev=25.63 00:25:03.333 clat (usec): min=103, max=6058, avg=738.81, stdev=256.98 00:25:03.333 lat (usec): min=122, max=6115, avg=766.85, stdev=258.86 00:25:03.333 clat percentiles (usec): 00:25:03.333 | 50.000th=[ 750], 99.000th=[ 1385], 99.900th=[ 2245], 99.990th=[ 5407], 00:25:03.333 | 99.999th=[ 5932] 00:25:03.333 bw ( KiB/s): min=98103, max=141011, per=99.75%, avg=113142.42, stdev=2022.12, samples=114 00:25:03.333 iops : min=24525, max=35252, avg=28285.37, stdev=505.52, samples=114 00:25:03.333 lat (usec) : 250=2.28%, 500=17.88%, 750=37.22%, 1000=34.75% 00:25:03.333 lat (msec) : 2=7.76%, 4=0.07%, 10=0.04% 00:25:03.333 cpu : usr=60.97%, sys=26.09%, ctx=7009, majf=0, minf=24024 00:25:03.333 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.333 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.333 issued rwts: total=280742,283581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.333 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:03.333 00:25:03.333 Run status group 0 (all jobs): 00:25:03.333 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1097MiB (1150MB), run=10001-10001msec 00:25:03.333 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=1108MiB (1162MB), run=10001-10001msec 00:25:03.333 ----------------------------------------------------- 00:25:03.333 Suppressions used: 00:25:03.333 count bytes template 00:25:03.333 6 48 /usr/src/fio/parse.c 00:25:03.333 2639 253344 /usr/src/fio/iolog.c 00:25:03.333 1 8 libtcmalloc_minimal.so 00:25:03.333 1 904 libcrypto.so 00:25:03.333 ----------------------------------------------------- 00:25:03.333 00:25:03.333 00:25:03.333 real 0m12.401s 00:25:03.333 user 0m38.536s 00:25:03.333 sys 0m16.009s 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:25:03.333 ************************************ 00:25:03.333 END TEST bdev_fio_rw_verify 00:25:03.333 ************************************ 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:25:03.333 
11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:25:03.333 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:25:03.334 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ab3d696f-8217-4a0c-8ec5-7352e2fbdca6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ab3d696f-8217-4a0c-8ec5-7352e2fbdca6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a56477bc-a82d-40fa-a895-c8f9806dd013"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a56477bc-a82d-40fa-a895-c8f9806dd013",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b34bba2e-663f-46bc-85ec-3c5071cbb5c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b34bba2e-663f-46bc-85ec-3c5071cbb5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6616b5a9-aab5-4ee7-a24f-f14408d2169c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6616b5a9-aab5-4ee7-a24f-f14408d2169c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "64523532-f90f-42d2-82e0-82dced4e4f73"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "64523532-f90f-42d2-82e0-82dced4e4f73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6bda4b84-b6ca-4340-bf55-12892bbe6c6b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6bda4b84-b6ca-4340-bf55-12892bbe6c6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:25:03.592 /home/vagrant/spdk_repo/spdk 00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 
00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:25:03.592 00:25:03.592 real 0m12.574s 00:25:03.592 user 0m38.626s 00:25:03.592 sys 0m16.090s 00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.592 11:54:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:03.592 ************************************ 00:25:03.592 END TEST bdev_fio 00:25:03.592 ************************************ 00:25:03.592 11:54:00 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:03.592 11:54:00 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:03.592 11:54:00 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:25:03.592 11:54:00 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:03.592 11:54:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:03.592 ************************************ 00:25:03.592 START TEST bdev_verify 00:25:03.592 ************************************ 00:25:03.592 11:54:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:03.592 [2024-07-25 11:54:00.522463] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:03.592 [2024-07-25 11:54:00.522655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75825 ] 00:25:03.851 [2024-07-25 11:54:00.700738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:04.109 [2024-07-25 11:54:00.889212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.109 [2024-07-25 11:54:00.889225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.366 Running I/O for 5 seconds... 
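For reference, the bdev_verify stage that just started boils down to the single bdevperf invocation traced above; a sketch with the flags glossed (the glosses are assumptions from common bdevperf usage, the log itself only shows the raw flags):

  #   -q 128     per-job queue depth
  #   -o 4096    4 KiB I/O size
  #   -w verify  write a pattern, read it back, compare
  #   -t 5       five-second run
  #   -C         every core may submit I/O to each bdev
  #   -m 0x3     core mask matching the two reactors started above
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The paired "Core Mask 0x1" / "Core Mask 0x2" rows per device in the table below are the per-reactor halves of that -m 0x3 run.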
00:25:09.637 00:25:09.637 Latency(us) 00:25:09.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.637 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x0 length 0xa0000 00:25:09.637 nvme0n1 : 5.08 1662.60 6.49 0.00 0.00 76833.46 9294.20 71493.82 00:25:09.637 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0xa0000 length 0xa0000 00:25:09.637 nvme0n1 : 5.04 1725.67 6.74 0.00 0.00 74024.15 9294.20 68634.07 00:25:09.637 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x0 length 0xbd0bd 00:25:09.637 nvme1n1 : 5.06 2691.83 10.51 0.00 0.00 47328.04 5481.19 60769.75 00:25:09.637 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:25:09.637 nvme1n1 : 5.04 2784.56 10.88 0.00 0.00 45682.15 5153.51 57433.37 00:25:09.637 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x0 length 0x80000 00:25:09.637 nvme2n1 : 5.07 1665.92 6.51 0.00 0.00 76177.64 6791.91 71493.82 00:25:09.637 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x80000 length 0x80000 00:25:09.637 nvme2n1 : 5.05 1750.09 6.84 0.00 0.00 72728.97 8877.15 71970.44 00:25:09.637 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x0 length 0x80000 00:25:09.637 nvme2n2 : 5.08 1663.41 6.50 0.00 0.00 76136.06 10902.81 67680.81 00:25:09.637 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x80000 length 0x80000 00:25:09.637 nvme2n2 : 5.07 1742.88 6.81 0.00 0.00 72872.23 8638.84 71970.44 00:25:09.637 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x0 length 0x80000 00:25:09.637 nvme2n3 : 5.08 1661.68 6.49 0.00 0.00 76080.97 8698.41 62914.56 00:25:09.637 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x80000 length 0x80000 00:25:09.637 nvme2n3 : 5.05 1724.13 6.73 0.00 0.00 73525.92 14358.34 57909.99 00:25:09.637 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x0 length 0x20000 00:25:09.637 nvme3n1 : 5.09 1660.94 6.49 0.00 0.00 76054.99 8221.79 72447.07 00:25:09.637 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:09.637 Verification LBA range: start 0x20000 length 0x20000 00:25:09.637 nvme3n1 : 5.07 1741.81 6.80 0.00 0.00 72640.97 4081.11 68634.07 00:25:09.637 =================================================================================================================== 00:25:09.637 Total : 22475.53 87.80 0.00 0.00 67822.16 4081.11 72447.07 00:25:11.014 00:25:11.014 real 0m7.219s 00:25:11.014 user 0m11.351s 00:25:11.014 sys 0m1.666s 00:25:11.014 11:54:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.014 11:54:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:11.015 ************************************ 00:25:11.015 END TEST bdev_verify 00:25:11.015 ************************************ 00:25:11.015 11:54:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:11.015 11:54:07 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:25:11.015 11:54:07 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:11.015 11:54:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:11.015 ************************************ 00:25:11.015 START TEST bdev_verify_big_io 00:25:11.015 ************************************ 00:25:11.015 11:54:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:11.015 [2024-07-25 11:54:07.811352] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:11.015 [2024-07-25 11:54:07.811586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75926 ] 00:25:11.015 [2024-07-25 11:54:07.977544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.274 [2024-07-25 11:54:08.165440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.274 [2024-07-25 11:54:08.165449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.840 Running I/O for 5 seconds... 00:25:18.417 00:25:18.417 Latency(us) 00:25:18.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.417 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:18.417 Verification LBA range: start 0x0 length 0xa000 00:25:18.417 nvme0n1 : 6.02 112.96 7.06 0.00 0.00 1071432.64 84362.71 1098145.05 00:25:18.417 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:18.417 Verification LBA range: start 0xa000 length 0xa000 00:25:18.417 nvme0n1 : 6.07 92.24 5.77 0.00 0.00 1354332.08 189696.93 2013265.92 00:25:18.417 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:18.417 Verification LBA range: start 0x0 length 0xbd0b 00:25:18.417 nvme1n1 : 6.02 127.52 7.97 0.00 0.00 931493.08 83886.08 838860.80 00:25:18.418 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0xbd0b length 0xbd0b 00:25:18.418 nvme1n1 : 6.06 100.36 6.27 0.00 0.00 1203410.78 10724.07 2470826.36 00:25:18.418 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x0 length 0x8000 00:25:18.418 nvme2n1 : 5.92 81.12 5.07 0.00 0.00 1426547.93 231639.97 2394566.28 00:25:18.418 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x8000 length 0x8000 00:25:18.418 nvme2n1 : 6.04 129.78 8.11 0.00 0.00 899165.32 23354.65 930372.89 00:25:18.418 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x0 length 0x8000 00:25:18.418 nvme2n2 : 6.03 135.31 8.46 0.00 0.00 833515.70 105810.85 1151527.10 00:25:18.418 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x8000 length 0x8000 00:25:18.418 nvme2n2 : 6.05 121.75 7.61 
0.00 0.00 919867.05 152520.15 1555705.48 00:25:18.418 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x0 length 0x8000 00:25:18.418 nvme2n3 : 6.03 100.89 6.31 0.00 0.00 1081953.50 79596.45 1944631.85 00:25:18.418 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x8000 length 0x8000 00:25:18.418 nvme2n3 : 6.06 113.51 7.09 0.00 0.00 967509.32 10009.13 2028517.93 00:25:18.418 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x0 length 0x2000 00:25:18.418 nvme3n1 : 6.04 109.93 6.87 0.00 0.00 968808.35 4468.36 2958890.82 00:25:18.418 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:18.418 Verification LBA range: start 0x2000 length 0x2000 00:25:18.418 nvme3n1 : 6.07 113.42 7.09 0.00 0.00 933859.68 10187.87 2318306.21 00:25:18.418 =================================================================================================================== 00:25:18.418 Total : 1338.79 83.67 0.00 0.00 1026012.19 4468.36 2958890.82 00:25:19.361 00:25:19.361 real 0m8.524s 00:25:19.361 user 0m15.344s 00:25:19.361 sys 0m0.467s 00:25:19.361 11:54:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.361 11:54:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:19.361 ************************************ 00:25:19.361 END TEST bdev_verify_big_io 00:25:19.361 ************************************ 00:25:19.361 11:54:16 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:19.361 11:54:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:25:19.361 11:54:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.361 11:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:19.361 ************************************ 00:25:19.361 START TEST bdev_write_zeroes 00:25:19.361 ************************************ 00:25:19.361 11:54:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:19.361 [2024-07-25 11:54:16.378356] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:19.361 [2024-07-25 11:54:16.378547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76041 ] 00:25:19.620 [2024-07-25 11:54:16.544128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.878 [2024-07-25 11:54:16.782810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.445 Running I/O for 1 seconds... 
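The bdev_write_zeroes stage above reuses the same harness for a one-second write_zeroes smoke test on a single reactor (no core mask given this time); a sketch of the traced invocation:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1

As in the verify runs, nvme1n1 stands apart in the table that follows (about 13.3k IOPS versus roughly 8.6k for the other devices).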
00:25:21.382 00:25:21.382 Latency(us) 00:25:21.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.382 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:21.382 nvme0n1 : 1.02 8686.68 33.93 0.00 0.00 14717.38 8936.73 30384.87 00:25:21.382 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:21.382 nvme1n1 : 1.02 13306.07 51.98 0.00 0.00 9596.56 5510.98 23712.12 00:25:21.382 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:21.382 nvme2n1 : 1.02 8628.08 33.70 0.00 0.00 14710.34 8877.15 27405.96 00:25:21.382 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:21.382 nvme2n2 : 1.03 8614.54 33.65 0.00 0.00 14715.08 8877.15 27405.96 00:25:21.382 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:21.382 nvme2n3 : 1.03 8601.19 33.60 0.00 0.00 14725.08 8877.15 27286.81 00:25:21.382 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:21.382 nvme3n1 : 1.03 8587.71 33.55 0.00 0.00 14733.19 8877.15 27286.81 00:25:21.382 =================================================================================================================== 00:25:21.382 Total : 56424.27 220.41 0.00 0.00 13513.81 5510.98 30384.87 00:25:22.759 00:25:22.759 real 0m3.254s 00:25:22.759 user 0m2.500s 00:25:22.759 sys 0m0.574s 00:25:22.759 11:54:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:22.759 11:54:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:22.759 ************************************ 00:25:22.759 END TEST bdev_write_zeroes 00:25:22.759 ************************************ 00:25:22.759 11:54:19 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:22.759 11:54:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:25:22.759 11:54:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:22.759 11:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:22.759 ************************************ 00:25:22.759 START TEST bdev_json_nonenclosed 00:25:22.759 ************************************ 00:25:22.759 11:54:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:22.759 [2024-07-25 11:54:19.679438] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:22.759 [2024-07-25 11:54:19.679608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76098 ] 00:25:23.018 [2024-07-25 11:54:19.856049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.277 [2024-07-25 11:54:20.085879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.277 [2024-07-25 11:54:20.086004] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
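bdev_json_nonenclosed, whose error just printed, is a negative test: bdevperf is fed a config that is not a single top-level JSON object, and the test passes when the loader rejects it (the rpc/app_stop follow-ups continue below). A hypothetical input of the rejected shape, since the actual nonenclosed.json contents are not shown in this log:

  # Hypothetical reconstruction: valid JSON, but the top level is an array,
  # not the object json_config_prepare_ctx requires, so it trips the
  # 'not enclosed in {}' rejection seen above.
  printf '%s\n' '[ { "subsystems": [] } ]' > nonenclosed.json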
00:25:23.277 [2024-07-25 11:54:20.086041] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:23.277 [2024-07-25 11:54:20.086062] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:23.535 00:25:23.535 real 0m0.969s 00:25:23.535 user 0m0.731s 00:25:23.535 sys 0m0.130s 00:25:23.535 11:54:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:23.535 11:54:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:23.535 ************************************ 00:25:23.535 END TEST bdev_json_nonenclosed 00:25:23.535 ************************************ 00:25:23.794 11:54:20 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:23.794 11:54:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:25:23.794 11:54:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:23.794 11:54:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:23.794 ************************************ 00:25:23.794 START TEST bdev_json_nonarray 00:25:23.794 ************************************ 00:25:23.794 11:54:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:23.794 [2024-07-25 11:54:20.706107] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:23.794 [2024-07-25 11:54:20.706313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76129 ] 00:25:24.052 [2024-07-25 11:54:20.878571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.311 [2024-07-25 11:54:21.121546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.311 [2024-07-25 11:54:21.121685] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
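The sibling bdev_json_nonarray test, whose error also just printed, does the same with a top-level object whose "subsystems" key is not an array. Again a hypothetical input, as the real nonarray.json is not shown in this log:

  # Hypothetical reconstruction: object at top level, but 'subsystems' is
  # itself an object, which triggers the 'should be an array' rejection
  # traced above; its rpc/app_stop tail continues below.
  printf '%s\n' '{ "subsystems": {} }' > nonarray.json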
00:25:24.311 [2024-07-25 11:54:21.121742] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:24.311 [2024-07-25 11:54:21.121762] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:24.569 00:25:24.569 real 0m0.978s 00:25:24.569 user 0m0.739s 00:25:24.569 sys 0m0.130s 00:25:24.569 11:54:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.569 11:54:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:24.569 ************************************ 00:25:24.569 END TEST bdev_json_nonarray 00:25:24.569 ************************************ 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:25:24.828 11:54:21 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:25.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:33.509 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:33.509 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:33.509 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:33.509 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:33.509 00:25:33.509 real 1m9.209s 00:25:33.509 user 1m46.544s 00:25:33.509 sys 0m39.103s 00:25:33.509 11:54:29 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:33.509 ************************************ 00:25:33.509 END TEST blockdev_xnvme 00:25:33.509 ************************************ 00:25:33.509 11:54:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:33.509 11:54:29 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:33.509 11:54:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:33.509 11:54:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:33.509 11:54:29 -- common/autotest_common.sh@10 -- # set +x 00:25:33.509 ************************************ 00:25:33.509 START TEST ublk 00:25:33.509 ************************************ 00:25:33.509 11:54:29 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:33.509 * Looking for test storage... 
00:25:33.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:25:33.509 11:54:29 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:25:33.509 11:54:29 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:25:33.509 11:54:29 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:25:33.509 11:54:29 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:25:33.509 11:54:29 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:25:33.509 11:54:29 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:25:33.509 11:54:29 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:25:33.509 11:54:29 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:25:33.509 11:54:29 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:25:33.509 11:54:29 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:33.509 11:54:29 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:33.509 11:54:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:33.509 ************************************ 00:25:33.509 START TEST test_save_ublk_config 00:25:33.509 ************************************ 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:25:33.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76435 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76435 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76435 ']' 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:33.509 11:54:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:33.509 [2024-07-25 11:54:30.100033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
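test_save_ublk_config is a round trip: start spdk_tgt (the startup in progress here), create one ublk disk on a malloc bdev, dump the live state with save_config, kill the target, then relaunch spdk_tgt from that JSON (fed back below via -c /dev/fd/63) and confirm /dev/ublkb0 reappears. The same round trip by hand would look roughly like this, assuming the rpc.py default socket /var/tmp/spdk.sock and a hypothetical ublk_config.json as the dump file; the sizes match the saved config printed below (8192 blocks x 4096 B = 32 MiB, 1 queue, depth 128):

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
  ./scripts/rpc.py save_config > ublk_config.json
  # stop the target, then bring it back straight from the saved config:
  ./build/bin/spdk_tgt -L ublk -c ublk_config.json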
00:25:33.509 [2024-07-25 11:54:30.100414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76435 ] 00:25:33.509 [2024-07-25 11:54:30.271358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.509 [2024-07-25 11:54:30.511762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.443 11:54:31 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:34.443 11:54:31 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:25:34.443 11:54:31 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:25:34.443 11:54:31 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:25:34.443 11:54:31 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.443 11:54:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:35.382 [2024-07-25 11:54:32.371740] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:35.382 [2024-07-25 11:54:32.373110] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:35.382 malloc0 00:25:35.659 [2024-07-25 11:54:32.413587] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:35.659 [2024-07-25 11:54:32.413867] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:35.659 [2024-07-25 11:54:32.413938] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:35.659 [2024-07-25 11:54:32.414060] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:35.916 [2024-07-25 11:54:32.941762] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:35.916 [2024-07-25 11:54:32.941819] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:35.916 [2024-07-25 11:54:32.949745] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:35.916 [2024-07-25 11:54:32.949903] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:36.174 [2024-07-25 11:54:32.966722] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:36.174 0 00:25:36.174 11:54:32 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.174 11:54:32 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:25:36.174 11:54:32 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.174 11:54:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:36.434 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.434 11:54:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:25:36.434 "subsystems": [ 00:25:36.434 { 00:25:36.434 "subsystem": "keyring", 00:25:36.434 "config": [] 00:25:36.434 }, 00:25:36.434 { 00:25:36.434 "subsystem": "iobuf", 00:25:36.434 "config": [ 00:25:36.434 { 00:25:36.434 "method": "iobuf_set_options", 00:25:36.434 "params": { 00:25:36.434 "small_pool_count": 8192, 00:25:36.434 "large_pool_count": 1024, 00:25:36.434 "small_bufsize": 8192, 00:25:36.434 "large_bufsize": 135168 00:25:36.434 } 00:25:36.434 } 00:25:36.434 ] 00:25:36.434 }, 00:25:36.434 { 
00:25:36.434 "subsystem": "sock", 00:25:36.434 "config": [ 00:25:36.434 { 00:25:36.434 "method": "sock_set_default_impl", 00:25:36.435 "params": { 00:25:36.435 "impl_name": "posix" 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "sock_impl_set_options", 00:25:36.435 "params": { 00:25:36.435 "impl_name": "ssl", 00:25:36.435 "recv_buf_size": 4096, 00:25:36.435 "send_buf_size": 4096, 00:25:36.435 "enable_recv_pipe": true, 00:25:36.435 "enable_quickack": false, 00:25:36.435 "enable_placement_id": 0, 00:25:36.435 "enable_zerocopy_send_server": true, 00:25:36.435 "enable_zerocopy_send_client": false, 00:25:36.435 "zerocopy_threshold": 0, 00:25:36.435 "tls_version": 0, 00:25:36.435 "enable_ktls": false 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "sock_impl_set_options", 00:25:36.435 "params": { 00:25:36.435 "impl_name": "posix", 00:25:36.435 "recv_buf_size": 2097152, 00:25:36.435 "send_buf_size": 2097152, 00:25:36.435 "enable_recv_pipe": true, 00:25:36.435 "enable_quickack": false, 00:25:36.435 "enable_placement_id": 0, 00:25:36.435 "enable_zerocopy_send_server": true, 00:25:36.435 "enable_zerocopy_send_client": false, 00:25:36.435 "zerocopy_threshold": 0, 00:25:36.435 "tls_version": 0, 00:25:36.435 "enable_ktls": false 00:25:36.435 } 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "vmd", 00:25:36.435 "config": [] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "accel", 00:25:36.435 "config": [ 00:25:36.435 { 00:25:36.435 "method": "accel_set_options", 00:25:36.435 "params": { 00:25:36.435 "small_cache_size": 128, 00:25:36.435 "large_cache_size": 16, 00:25:36.435 "task_count": 2048, 00:25:36.435 "sequence_count": 2048, 00:25:36.435 "buf_count": 2048 00:25:36.435 } 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "bdev", 00:25:36.435 "config": [ 00:25:36.435 { 00:25:36.435 "method": "bdev_set_options", 00:25:36.435 "params": { 00:25:36.435 "bdev_io_pool_size": 65535, 00:25:36.435 "bdev_io_cache_size": 256, 00:25:36.435 "bdev_auto_examine": true, 00:25:36.435 "iobuf_small_cache_size": 128, 00:25:36.435 "iobuf_large_cache_size": 16 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "bdev_raid_set_options", 00:25:36.435 "params": { 00:25:36.435 "process_window_size_kb": 1024, 00:25:36.435 "process_max_bandwidth_mb_sec": 0 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "bdev_iscsi_set_options", 00:25:36.435 "params": { 00:25:36.435 "timeout_sec": 30 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "bdev_nvme_set_options", 00:25:36.435 "params": { 00:25:36.435 "action_on_timeout": "none", 00:25:36.435 "timeout_us": 0, 00:25:36.435 "timeout_admin_us": 0, 00:25:36.435 "keep_alive_timeout_ms": 10000, 00:25:36.435 "arbitration_burst": 0, 00:25:36.435 "low_priority_weight": 0, 00:25:36.435 "medium_priority_weight": 0, 00:25:36.435 "high_priority_weight": 0, 00:25:36.435 "nvme_adminq_poll_period_us": 10000, 00:25:36.435 "nvme_ioq_poll_period_us": 0, 00:25:36.435 "io_queue_requests": 0, 00:25:36.435 "delay_cmd_submit": true, 00:25:36.435 "transport_retry_count": 4, 00:25:36.435 "bdev_retry_count": 3, 00:25:36.435 "transport_ack_timeout": 0, 00:25:36.435 "ctrlr_loss_timeout_sec": 0, 00:25:36.435 "reconnect_delay_sec": 0, 00:25:36.435 "fast_io_fail_timeout_sec": 0, 00:25:36.435 "disable_auto_failback": false, 00:25:36.435 "generate_uuids": false, 00:25:36.435 "transport_tos": 0, 00:25:36.435 "nvme_error_stat": false, 
00:25:36.435 "rdma_srq_size": 0, 00:25:36.435 "io_path_stat": false, 00:25:36.435 "allow_accel_sequence": false, 00:25:36.435 "rdma_max_cq_size": 0, 00:25:36.435 "rdma_cm_event_timeout_ms": 0, 00:25:36.435 "dhchap_digests": [ 00:25:36.435 "sha256", 00:25:36.435 "sha384", 00:25:36.435 "sha512" 00:25:36.435 ], 00:25:36.435 "dhchap_dhgroups": [ 00:25:36.435 "null", 00:25:36.435 "ffdhe2048", 00:25:36.435 "ffdhe3072", 00:25:36.435 "ffdhe4096", 00:25:36.435 "ffdhe6144", 00:25:36.435 "ffdhe8192" 00:25:36.435 ] 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "bdev_nvme_set_hotplug", 00:25:36.435 "params": { 00:25:36.435 "period_us": 100000, 00:25:36.435 "enable": false 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "bdev_malloc_create", 00:25:36.435 "params": { 00:25:36.435 "name": "malloc0", 00:25:36.435 "num_blocks": 8192, 00:25:36.435 "block_size": 4096, 00:25:36.435 "physical_block_size": 4096, 00:25:36.435 "uuid": "a7a0f2a9-cf3a-45a3-92c9-8ca95aa2a381", 00:25:36.435 "optimal_io_boundary": 0, 00:25:36.435 "md_size": 0, 00:25:36.435 "dif_type": 0, 00:25:36.435 "dif_is_head_of_md": false, 00:25:36.435 "dif_pi_format": 0 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "bdev_wait_for_examine" 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "scsi", 00:25:36.435 "config": null 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "scheduler", 00:25:36.435 "config": [ 00:25:36.435 { 00:25:36.435 "method": "framework_set_scheduler", 00:25:36.435 "params": { 00:25:36.435 "name": "static" 00:25:36.435 } 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "vhost_scsi", 00:25:36.435 "config": [] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "vhost_blk", 00:25:36.435 "config": [] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "ublk", 00:25:36.435 "config": [ 00:25:36.435 { 00:25:36.435 "method": "ublk_create_target", 00:25:36.435 "params": { 00:25:36.435 "cpumask": "1" 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "ublk_start_disk", 00:25:36.435 "params": { 00:25:36.435 "bdev_name": "malloc0", 00:25:36.435 "ublk_id": 0, 00:25:36.435 "num_queues": 1, 00:25:36.435 "queue_depth": 128 00:25:36.435 } 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "nbd", 00:25:36.435 "config": [] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "nvmf", 00:25:36.435 "config": [ 00:25:36.435 { 00:25:36.435 "method": "nvmf_set_config", 00:25:36.435 "params": { 00:25:36.435 "discovery_filter": "match_any", 00:25:36.435 "admin_cmd_passthru": { 00:25:36.435 "identify_ctrlr": false 00:25:36.435 } 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "nvmf_set_max_subsystems", 00:25:36.435 "params": { 00:25:36.435 "max_subsystems": 1024 00:25:36.435 } 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "method": "nvmf_set_crdt", 00:25:36.435 "params": { 00:25:36.435 "crdt1": 0, 00:25:36.435 "crdt2": 0, 00:25:36.435 "crdt3": 0 00:25:36.435 } 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 }, 00:25:36.435 { 00:25:36.435 "subsystem": "iscsi", 00:25:36.435 "config": [ 00:25:36.435 { 00:25:36.435 "method": "iscsi_set_options", 00:25:36.435 "params": { 00:25:36.435 "node_base": "iqn.2016-06.io.spdk", 00:25:36.435 "max_sessions": 128, 00:25:36.435 "max_connections_per_session": 2, 00:25:36.435 "max_queue_depth": 64, 00:25:36.435 "default_time2wait": 2, 00:25:36.435 "default_time2retain": 20, 00:25:36.435 
"first_burst_length": 8192, 00:25:36.435 "immediate_data": true, 00:25:36.435 "allow_duplicated_isid": false, 00:25:36.435 "error_recovery_level": 0, 00:25:36.435 "nop_timeout": 60, 00:25:36.435 "nop_in_interval": 30, 00:25:36.435 "disable_chap": false, 00:25:36.435 "require_chap": false, 00:25:36.435 "mutual_chap": false, 00:25:36.435 "chap_group": 0, 00:25:36.435 "max_large_datain_per_connection": 64, 00:25:36.435 "max_r2t_per_connection": 4, 00:25:36.435 "pdu_pool_size": 36864, 00:25:36.435 "immediate_data_pool_size": 16384, 00:25:36.435 "data_out_pool_size": 2048 00:25:36.435 } 00:25:36.435 } 00:25:36.435 ] 00:25:36.435 } 00:25:36.435 ] 00:25:36.436 }' 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76435 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76435 ']' 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76435 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76435 00:25:36.436 killing process with pid 76435 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76435' 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76435 00:25:36.436 11:54:33 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76435 00:25:37.809 [2024-07-25 11:54:34.560234] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:37.809 [2024-07-25 11:54:34.591795] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:37.809 [2024-07-25 11:54:34.591988] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:37.810 [2024-07-25 11:54:34.601762] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:37.810 [2024-07-25 11:54:34.601834] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:37.810 [2024-07-25 11:54:34.601849] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:37.810 [2024-07-25 11:54:34.601887] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:25:37.810 [2024-07-25 11:54:34.602077] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:25:39.186 11:54:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:25:39.186 11:54:35 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76512 00:25:39.186 11:54:35 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76512 00:25:39.186 11:54:35 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76512 ']' 00:25:39.186 11:54:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.186 11:54:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:25:39.186 "subsystems": [ 00:25:39.186 { 00:25:39.186 "subsystem": "keyring", 00:25:39.186 "config": [] 00:25:39.186 }, 00:25:39.186 { 00:25:39.186 "subsystem": "iobuf", 
00:25:39.186 "config": [ 00:25:39.186 { 00:25:39.186 "method": "iobuf_set_options", 00:25:39.186 "params": { 00:25:39.186 "small_pool_count": 8192, 00:25:39.186 "large_pool_count": 1024, 00:25:39.186 "small_bufsize": 8192, 00:25:39.186 "large_bufsize": 135168 00:25:39.186 } 00:25:39.186 } 00:25:39.186 ] 00:25:39.186 }, 00:25:39.186 { 00:25:39.186 "subsystem": "sock", 00:25:39.186 "config": [ 00:25:39.186 { 00:25:39.186 "method": "sock_set_default_impl", 00:25:39.186 "params": { 00:25:39.186 "impl_name": "posix" 00:25:39.186 } 00:25:39.186 }, 00:25:39.186 { 00:25:39.186 "method": "sock_impl_set_options", 00:25:39.186 "params": { 00:25:39.186 "impl_name": "ssl", 00:25:39.186 "recv_buf_size": 4096, 00:25:39.186 "send_buf_size": 4096, 00:25:39.187 "enable_recv_pipe": true, 00:25:39.187 "enable_quickack": false, 00:25:39.187 "enable_placement_id": 0, 00:25:39.187 "enable_zerocopy_send_server": true, 00:25:39.187 "enable_zerocopy_send_client": false, 00:25:39.187 "zerocopy_threshold": 0, 00:25:39.187 "tls_version": 0, 00:25:39.187 "enable_ktls": false 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "sock_impl_set_options", 00:25:39.187 "params": { 00:25:39.187 "impl_name": "posix", 00:25:39.187 "recv_buf_size": 2097152, 00:25:39.187 "send_buf_size": 2097152, 00:25:39.187 "enable_recv_pipe": true, 00:25:39.187 "enable_quickack": false, 00:25:39.187 "enable_placement_id": 0, 00:25:39.187 "enable_zerocopy_send_server": true, 00:25:39.187 "enable_zerocopy_send_client": false, 00:25:39.187 "zerocopy_threshold": 0, 00:25:39.187 "tls_version": 0, 00:25:39.187 "enable_ktls": false 00:25:39.187 } 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "vmd", 00:25:39.187 "config": [] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "accel", 00:25:39.187 "config": [ 00:25:39.187 { 00:25:39.187 "method": "accel_set_options", 00:25:39.187 "params": { 00:25:39.187 "small_cache_size": 128, 00:25:39.187 "large_cache_size": 16, 00:25:39.187 "task_count": 2048, 00:25:39.187 "sequence_count": 2048, 00:25:39.187 "buf_count": 2048 00:25:39.187 } 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "bdev", 00:25:39.187 "config": [ 00:25:39.187 { 00:25:39.187 "method": "bdev_set_options", 00:25:39.187 "params": { 00:25:39.187 "bdev_io_pool_size": 65535, 00:25:39.187 "bdev_io_cache_size": 256, 00:25:39.187 "bdev_auto_examine": true, 00:25:39.187 "iobuf_small_cache_size": 128, 00:25:39.187 "iobuf_large_cache_size": 16 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "bdev_raid_set_options", 00:25:39.187 "params": { 00:25:39.187 "process_window_size_kb": 1024, 00:25:39.187 "process_max_bandwidth_mb_sec": 0 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "bdev_iscsi_set_options", 00:25:39.187 "params": { 00:25:39.187 "timeout_sec": 30 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "bdev_nvme_set_options", 00:25:39.187 "params": { 00:25:39.187 "action_on_timeout": "none", 00:25:39.187 "timeout_us": 0, 00:25:39.187 "timeout_admin_us": 0, 00:25:39.187 "keep_alive_timeout_ms": 10000, 00:25:39.187 "arbitration_burst": 0, 00:25:39.187 "low_priority_weight": 0, 00:25:39.187 "medium_priority_weight": 0, 00:25:39.187 "high_priority_weight": 0, 00:25:39.187 "nvme_adminq_poll_period_us": 10000, 00:25:39.187 "nvme_ioq_poll_period_us": 0, 00:25:39.187 "io_queue_requests": 0, 00:25:39.187 "delay_cmd_submit": true, 00:25:39.187 "transport_retry_count": 4, 00:25:39.187 
"bdev_retry_count": 3, 00:25:39.187 "transport_ack_timeout": 0, 00:25:39.187 "ctrlr_loss_timeout_sec": 0, 00:25:39.187 "reconnect_delay_sec": 0, 00:25:39.187 "fast_io_fail_timeout_sec": 0, 00:25:39.187 "disable_auto_failback": false, 00:25:39.187 "generate_uuids": false, 00:25:39.187 "transport_tos": 0, 00:25:39.187 "nvme_error_stat": false, 00:25:39.187 "rdma_srq_size": 0, 00:25:39.187 "io_path_stat": false, 00:25:39.187 "allow_accel_sequence": false, 00:25:39.187 "rdma_max_cq_size": 0, 00:25:39.187 "rdma_cm_event_timeout_ms": 0, 00:25:39.187 "dhchap_digests": [ 00:25:39.187 "sha256", 00:25:39.187 "sha384", 00:25:39.187 "sha512" 00:25:39.187 ], 00:25:39.187 "dhchap_dhgroups": [ 00:25:39.187 "null", 00:25:39.187 "ffdhe2048", 00:25:39.187 "ffdhe3072", 00:25:39.187 "ffdhe4096", 00:25:39.187 "ffdhe6144", 00:25:39.187 "ffdhe8192" 00:25:39.187 ] 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "bdev_nvme_set_hotplug", 00:25:39.187 "params": { 00:25:39.187 "period_us": 100000, 00:25:39.187 "enable": false 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "bdev_malloc_create", 00:25:39.187 "params": { 00:25:39.187 "name": "malloc0", 00:25:39.187 "num_blocks": 8192, 00:25:39.187 "block_size": 4096, 00:25:39.187 "physical_block_size": 4096, 00:25:39.187 "uuid": "a7a0f2a9-cf3a-45a3-92c9-8ca95aa2a381", 00:25:39.187 "optimal_io_boundary": 0, 00:25:39.187 "md_size": 0, 00:25:39.187 "dif_type": 0, 00:25:39.187 "dif_is_head_of_md": false, 00:25:39.187 "dif_pi_format": 0 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "bdev_wait_for_examine" 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "scsi", 00:25:39.187 "config": null 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "scheduler", 00:25:39.187 "config": [ 00:25:39.187 { 00:25:39.187 "method": "framework_set_scheduler", 00:25:39.187 "params": { 00:25:39.187 "name": "static" 00:25:39.187 } 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "vhost_scsi", 00:25:39.187 "config": [] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "vhost_blk", 00:25:39.187 "config": [] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "ublk", 00:25:39.187 "config": [ 00:25:39.187 { 00:25:39.187 "method": "ublk_create_target", 00:25:39.187 "params": { 00:25:39.187 "cpumask": "1" 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "ublk_start_disk", 00:25:39.187 "params": { 00:25:39.187 "bdev_name": "malloc0", 00:25:39.187 "ublk_id": 0, 00:25:39.187 "num_queues": 1, 00:25:39.187 "queue_depth": 128 00:25:39.187 } 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "nbd", 00:25:39.187 "config": [] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "nvmf", 00:25:39.187 "config": [ 00:25:39.187 { 00:25:39.187 "method": "nvmf_set_config", 00:25:39.187 "params": { 00:25:39.187 "discovery_filter": "match_any", 00:25:39.187 "admin_cmd_passthru": { 00:25:39.187 "identify_ctrlr": false 00:25:39.187 } 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "nvmf_set_max_subsystems", 00:25:39.187 "params": { 00:25:39.187 "max_subsystems": 1024 00:25:39.187 } 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "method": "nvmf_set_crdt", 00:25:39.187 "params": { 00:25:39.187 "crdt1": 0, 00:25:39.187 "crdt2": 0, 00:25:39.187 "crdt3": 0 00:25:39.187 } 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }, 00:25:39.187 { 00:25:39.187 "subsystem": "iscsi", 00:25:39.187 "config": [ 
00:25:39.187 { 00:25:39.187 "method": "iscsi_set_options", 00:25:39.187 "params": { 00:25:39.187 "node_base": "iqn.2016-06.io.spdk", 00:25:39.187 "max_sessions": 128, 00:25:39.187 "max_connections_per_session": 2, 00:25:39.187 "max_queue_depth": 64, 00:25:39.187 "default_time2wait": 2, 00:25:39.187 "default_time2retain": 20, 00:25:39.187 "first_burst_length": 8192, 00:25:39.187 "immediate_data": true, 00:25:39.187 "allow_duplicated_isid": false, 00:25:39.187 "error_recovery_level": 0, 00:25:39.187 "nop_timeout": 60, 00:25:39.187 "nop_in_interval": 30, 00:25:39.187 "disable_chap": false, 00:25:39.187 "require_chap": false, 00:25:39.187 "mutual_chap": false, 00:25:39.187 "chap_group": 0, 00:25:39.187 "max_large_datain_per_connection": 64, 00:25:39.187 "max_r2t_per_connection": 4, 00:25:39.187 "pdu_pool_size": 36864, 00:25:39.187 "immediate_data_pool_size": 16384, 00:25:39.187 "data_out_pool_size": 2048 00:25:39.187 } 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 } 00:25:39.187 ] 00:25:39.187 }' 00:25:39.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.187 11:54:35 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.187 11:54:35 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.187 11:54:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.187 11:54:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:39.187 [2024-07-25 11:54:35.994723] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:39.187 [2024-07-25 11:54:35.994875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76512 ] 00:25:39.187 [2024-07-25 11:54:36.156878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.446 [2024-07-25 11:54:36.360685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.382 [2024-07-25 11:54:37.230715] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:40.382 [2024-07-25 11:54:37.231794] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:40.382 [2024-07-25 11:54:37.238867] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:40.382 [2024-07-25 11:54:37.238966] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:40.382 [2024-07-25 11:54:37.238981] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:40.382 [2024-07-25 11:54:37.238990] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:40.382 [2024-07-25 11:54:37.247796] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:40.382 [2024-07-25 11:54:37.247823] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:40.382 [2024-07-25 11:54:37.254730] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:40.382 [2024-07-25 11:54:37.254851] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:40.382 [2024-07-25 11:54:37.271716] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76512 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76512 ']' 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76512 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76512 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:40.382 killing process with pid 76512 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76512' 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76512 00:25:40.382 11:54:37 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76512 00:25:42.283 [2024-07-25 11:54:39.157824] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:42.283 [2024-07-25 11:54:39.188829] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:42.283 [2024-07-25 11:54:39.189092] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:42.283 [2024-07-25 11:54:39.194756] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:42.283 [2024-07-25 11:54:39.194876] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:42.283 [2024-07-25 11:54:39.194900] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:42.283 [2024-07-25 11:54:39.194955] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:25:42.283 [2024-07-25 11:54:39.195233] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:25:43.656 11:54:40 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:25:43.656 ************************************ 00:25:43.656 END TEST test_save_ublk_config 00:25:43.656 ************************************ 00:25:43.656 00:25:43.656 real 0m10.447s 00:25:43.656 user 0m6.224s 00:25:43.656 sys 0m2.429s 00:25:43.656 11:54:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.656 11:54:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:43.656 11:54:40 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=76594 00:25:43.656 11:54:40 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:25:43.656 11:54:40 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:43.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.656 11:54:40 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76594 00:25:43.656 11:54:40 ublk -- common/autotest_common.sh@831 -- # '[' -z 76594 ']' 00:25:43.656 11:54:40 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.656 11:54:40 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.656 11:54:40 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.656 11:54:40 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.657 11:54:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:43.657 [2024-07-25 11:54:40.562232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:43.657 [2024-07-25 11:54:40.562407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76594 ] 00:25:43.914 [2024-07-25 11:54:40.727861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:43.914 [2024-07-25 11:54:40.914440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.914 [2024-07-25 11:54:40.914448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.849 11:54:41 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.849 11:54:41 ublk -- common/autotest_common.sh@864 -- # return 0 00:25:44.849 11:54:41 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:25:44.849 11:54:41 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:44.849 11:54:41 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:44.849 11:54:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:44.849 ************************************ 00:25:44.849 START TEST test_create_ublk 00:25:44.849 ************************************ 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:25:44.849 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:44.849 [2024-07-25 11:54:41.636717] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:44.849 [2024-07-25 11:54:41.639365] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.849 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:25:44.849 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.849 
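The create test continuing below is a fixed sequence: make a 128 MiB malloc bdev, export it as /dev/ublkb0 with 4 queues of depth 512, check each field of ublk_get_disks with jq, write all 134217728 bytes with fio's 0xcc pattern writer (the verify read phase never actually runs, as fio itself notes, because --time_based spends the whole runtime writing), then stop the disk and tear down. Condensed from the rpc_cmd and fio lines of this run:

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create 128 4096            # auto-named "Malloc0"
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
  ./scripts/rpc.py ublk_get_disks -n 0
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  ./scripts/rpc.py ublk_stop_disk 0
  ./scripts/rpc.py ublk_destroy_target
  ./scripts/rpc.py bdev_malloc_delete Malloc0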
11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:25:44.849 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.849 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 [2024-07-25 11:54:41.891889] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:25:45.108 [2024-07-25 11:54:41.892398] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:25:45.108 [2024-07-25 11:54:41.892425] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:45.108 [2024-07-25 11:54:41.892439] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:45.108 [2024-07-25 11:54:41.899753] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:45.108 [2024-07-25 11:54:41.899799] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:45.108 [2024-07-25 11:54:41.907756] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:45.108 [2024-07-25 11:54:41.918987] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:45.108 [2024-07-25 11:54:41.949765] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:45.108 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.108 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:25:45.108 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:25:45.108 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:25:45.108 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.108 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 11:54:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.108 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:25:45.108 { 00:25:45.108 "ublk_device": "/dev/ublkb0", 00:25:45.108 "id": 0, 00:25:45.108 "queue_depth": 512, 00:25:45.108 "num_queues": 4, 00:25:45.108 "bdev_name": "Malloc0" 00:25:45.108 } 00:25:45.108 ]' 00:25:45.108 11:54:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:25:45.108 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:45.108 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:25:45.108 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:25:45.108 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:25:45.108 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:25:45.108 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:25:45.367 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:25:45.367 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:25:45.367 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:25:45.367 11:54:42 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:25:45.367 11:54:42 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:25:45.367 fio: verification read phase will never start because write phase uses all of runtime 00:25:45.367 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:25:45.367 fio-3.35 00:25:45.367 Starting 1 process 00:25:57.559 00:25:57.559 fio_test: (groupid=0, jobs=1): err= 0: pid=76639: Thu Jul 25 11:54:52 2024 00:25:57.559 write: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(466MiB/10001msec); 0 zone resets 00:25:57.559 clat (usec): min=52, max=4030, avg=82.25, stdev=125.06 00:25:57.559 lat (usec): min=52, max=4031, avg=83.05, stdev=125.08 00:25:57.559 clat percentiles (usec): 00:25:57.559 | 1.00th=[ 59], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:25:57.559 | 30.00th=[ 74], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:25:57.559 | 70.00th=[ 77], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 89], 00:25:57.559 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 2671], 99.95th=[ 3130], 00:25:57.559 | 99.99th=[ 3654] 00:25:57.559 bw ( KiB/s): min=44024, max=52632, per=100.00%, avg=47818.89, stdev=1685.64, samples=19 00:25:57.559 iops : min=11006, max=13158, avg=11954.68, stdev=421.40, samples=19 00:25:57.559 lat (usec) : 100=98.44%, 250=1.25%, 500=0.02%, 750=0.02%, 1000=0.02% 00:25:57.559 lat (msec) : 2=0.09%, 4=0.17%, 10=0.01% 00:25:57.559 cpu : usr=3.19%, sys=7.44%, ctx=119388, majf=0, minf=796 00:25:57.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.559 issued rwts: total=0,119381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:57.559 00:25:57.559 Run status group 0 (all jobs): 00:25:57.559 WRITE: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=466MiB (489MB), run=10001-10001msec 00:25:57.559 00:25:57.559 Disk stats (read/write): 00:25:57.559 ublkb0: ios=0/118163, merge=0/0, ticks=0/8882, in_queue=8883, util=99.10% 00:25:57.559 11:54:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd 
ublk_stop_disk 0 00:25:57.559 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.559 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.559 [2024-07-25 11:54:52.458253] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:57.559 [2024-07-25 11:54:52.500134] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:57.559 [2024-07-25 11:54:52.501451] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:57.559 [2024-07-25 11:54:52.507744] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:57.559 [2024-07-25 11:54:52.508077] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:57.559 [2024-07-25 11:54:52.508093] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:57.559 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.559 11:54:52 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:25:57.559 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:25:57.559 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 [2024-07-25 11:54:52.523845] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:25:57.560 request: 00:25:57.560 { 00:25:57.560 "ublk_id": 0, 00:25:57.560 "method": "ublk_stop_disk", 00:25:57.560 "req_id": 1 00:25:57.560 } 00:25:57.560 Got JSON-RPC error response 00:25:57.560 response: 00:25:57.560 { 00:25:57.560 "code": -19, 00:25:57.560 "message": "No such device" 00:25:57.560 } 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:57.560 11:54:52 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 [2024-07-25 11:54:52.539806] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:25:57.560 [2024-07-25 11:54:52.547741] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:25:57.560 [2024-07-25 11:54:52.547794] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:57.560 11:54:52 ublk.test_create_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:52 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:52 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:25:57.560 ************************************ 00:25:57.560 END TEST test_create_ublk 00:25:57.560 ************************************ 00:25:57.560 11:54:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:25:57.560 00:25:57.560 real 0m11.351s 00:25:57.560 user 0m0.734s 00:25:57.560 sys 0m0.829s 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:57.560 11:54:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 11:54:53 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:25:57.560 11:54:53 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:57.560 11:54:53 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.560 11:54:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 ************************************ 00:25:57.560 START TEST test_create_multi_ublk 00:25:57.560 ************************************ 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 [2024-07-25 11:54:53.031729] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:57.560 [2024-07-25 11:54:53.034056] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:25:57.560 11:54:53 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 [2024-07-25 11:54:53.280006] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:25:57.560 [2024-07-25 11:54:53.280604] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:25:57.560 [2024-07-25 11:54:53.280627] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:57.560 [2024-07-25 11:54:53.280638] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:57.560 [2024-07-25 11:54:53.288074] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:57.560 [2024-07-25 11:54:53.288120] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:57.560 [2024-07-25 11:54:53.295742] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:57.560 [2024-07-25 11:54:53.296516] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:57.560 [2024-07-25 11:54:53.306830] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.560 [2024-07-25 11:54:53.569920] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:25:57.560 [2024-07-25 11:54:53.570423] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:25:57.560 [2024-07-25 11:54:53.570449] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:25:57.560 [2024-07-25 11:54:53.570463] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:25:57.560 [2024-07-25 11:54:53.578955] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:57.560 [2024-07-25 11:54:53.579119] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:57.560 [2024-07-25 11:54:53.585747] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:57.560 [2024-07-25 11:54:53.586486] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:25:57.560 [2024-07-25 11:54:53.594818] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.560 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.561 [2024-07-25 11:54:53.857928] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:25:57.561 [2024-07-25 11:54:53.858423] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:25:57.561 [2024-07-25 11:54:53.858446] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:25:57.561 [2024-07-25 11:54:53.858456] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:25:57.561 [2024-07-25 11:54:53.865800] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:57.561 [2024-07-25 11:54:53.866040] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:57.561 [2024-07-25 11:54:53.873787] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:57.561 [2024-07-25 11:54:53.874570] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:25:57.561 [2024-07-25 11:54:53.882809] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:57.561 11:54:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.561 [2024-07-25 11:54:54.145892] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:25:57.561 [2024-07-25 11:54:54.146395] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:25:57.561 [2024-07-25 11:54:54.146421] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:25:57.561 [2024-07-25 11:54:54.146435] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:25:57.561 [2024-07-25 11:54:54.154957] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:57.561 [2024-07-25 11:54:54.155044] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:57.561 [2024-07-25 11:54:54.161733] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:57.561 [2024-07-25 11:54:54.162484] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:25:57.561 [2024-07-25 11:54:54.170776] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:25:57.561 { 00:25:57.561 "ublk_device": "/dev/ublkb0", 00:25:57.561 "id": 0, 00:25:57.561 "queue_depth": 512, 00:25:57.561 "num_queues": 4, 00:25:57.561 "bdev_name": "Malloc0" 00:25:57.561 }, 00:25:57.561 { 00:25:57.561 "ublk_device": "/dev/ublkb1", 00:25:57.561 "id": 1, 00:25:57.561 "queue_depth": 512, 00:25:57.561 "num_queues": 4, 00:25:57.561 "bdev_name": "Malloc1" 00:25:57.561 }, 00:25:57.561 { 00:25:57.561 "ublk_device": "/dev/ublkb2", 00:25:57.561 "id": 2, 00:25:57.561 "queue_depth": 512, 00:25:57.561 "num_queues": 4, 00:25:57.561 "bdev_name": "Malloc2" 00:25:57.561 }, 00:25:57.561 { 00:25:57.561 "ublk_device": "/dev/ublkb3", 00:25:57.561 "id": 3, 00:25:57.561 "queue_depth": 512, 00:25:57.561 "num_queues": 4, 00:25:57.561 "bdev_name": "Malloc3" 00:25:57.561 } 00:25:57.561 ]' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:25:57.561 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:25:57.818 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:58.076 11:54:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:25:58.076 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:25:58.076 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:25:58.076 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:25:58.076 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:58.334 [2024-07-25 11:54:55.243049] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:58.334 [2024-07-25 11:54:55.274113] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:58.334 [2024-07-25 11:54:55.278051] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:58.334 [2024-07-25 11:54:55.283716] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:58.334 [2024-07-25 11:54:55.284103] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:58.334 [2024-07-25 11:54:55.284124] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:58.334 [2024-07-25 11:54:55.289955] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:25:58.334 [2024-07-25 11:54:55.327183] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:58.334 [2024-07-25 11:54:55.332709] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:58.334 [2024-07-25 11:54:55.344890] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:58.334 [2024-07-25 11:54:55.345245] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:58.334 [2024-07-25 11:54:55.345268] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.334 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:58.334 [2024-07-25 11:54:55.362846] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:25:58.591 [2024-07-25 11:54:55.397760] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:58.591 [2024-07-25 11:54:55.402044] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:25:58.591 [2024-07-25 11:54:55.405757] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:58.591 [2024-07-25 11:54:55.406111] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:25:58.591 [2024-07-25 11:54:55.406134] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:58.592 [2024-07-25 11:54:55.416877] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:25:58.592 [2024-07-25 11:54:55.455201] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:58.592 [2024-07-25 11:54:55.456619] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:25:58.592 [2024-07-25 11:54:55.460711] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:58.592 [2024-07-25 11:54:55.461077] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:25:58.592 [2024-07-25 11:54:55.461096] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.592 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:25:58.848 [2024-07-25 11:54:55.692844] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:25:58.848 [2024-07-25 11:54:55.700713] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:25:58.848 [2024-07-25 11:54:55.700770] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:58.848 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:25:58.848 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:58.848 11:54:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:58.848 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.848 11:54:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:59.105 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.105 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:59.105 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:59.105 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.106 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:59.363 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.363 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:25:59.363 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:25:59.363 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.363 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:59.623 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.623 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:25:59.623 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:25:59.623 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.623 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:00.188 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.188 11:54:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:26:00.188 11:54:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:00.188 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.188 11:54:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:26:00.188 ************************************ 00:26:00.188 END TEST test_create_multi_ublk 00:26:00.188 ************************************ 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:00.188 00:26:00.188 real 0m4.099s 00:26:00.188 user 0m1.277s 00:26:00.188 sys 0m0.172s 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:00.188 11:54:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:00.188 11:54:57 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:00.188 11:54:57 ublk -- ublk/ublk.sh@147 -- # cleanup 00:26:00.188 11:54:57 ublk -- ublk/ublk.sh@130 -- # killprocess 76594 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@950 -- # '[' -z 76594 ']' 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@954 -- # kill -0 76594 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@955 -- # uname 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76594 00:26:00.188 killing process with pid 76594 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@956 -- 
# process_name=reactor_0 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76594' 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@969 -- # kill 76594 00:26:00.188 11:54:57 ublk -- common/autotest_common.sh@974 -- # wait 76594 00:26:01.122 [2024-07-25 11:54:58.150214] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:26:01.122 [2024-07-25 11:54:58.150283] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:26:02.497 00:26:02.497 real 0m29.414s 00:26:02.497 user 0m40.616s 00:26:02.497 sys 0m8.200s 00:26:02.497 11:54:59 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:02.497 ************************************ 00:26:02.497 END TEST ublk 00:26:02.497 ************************************ 00:26:02.497 11:54:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.497 11:54:59 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:02.497 11:54:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:02.497 11:54:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:02.497 11:54:59 -- common/autotest_common.sh@10 -- # set +x 00:26:02.497 ************************************ 00:26:02.497 START TEST ublk_recovery 00:26:02.497 ************************************ 00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:02.497 * Looking for test storage... 00:26:02.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:26:02.497 11:54:59 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:26:02.497 11:54:59 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:26:02.497 11:54:59 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:26:02.497 11:54:59 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76982 00:26:02.497 11:54:59 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:02.497 11:54:59 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76982 00:26:02.497 11:54:59 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 76982 ']' 00:26:02.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
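Stripped of the xtrace noise, the teardown traced above follows a strict order: stop every ublk device first, destroy the ublk target, and delete the backing malloc bdevs last. A condensed sketch of what ublk.sh executes here (the rpc.py path and the -t 120 client timeout are copied from the trace; MAX_DEV_ID is 3 in this run):

    for i in $(seq 0 "$MAX_DEV_ID"); do
        ./scripts/rpc.py ublk_stop_disk "$i"            # per device: STOP_DEV, then DEL_DEV
    done
    ./scripts/rpc.py -t 120 ublk_destroy_target         # -t 120: longer client timeout for the blocking destroy
    for i in $(seq 0 "$MAX_DEV_ID"); do
        ./scripts/rpc.py bdev_malloc_delete "Malloc$i"  # backing bdevs go away last
    done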
00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.497 11:54:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.756 [2024-07-25 11:54:59.536782] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:02.756 [2024-07-25 11:54:59.537019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76982 ] 00:26:02.756 [2024-07-25 11:54:59.732942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:03.014 [2024-07-25 11:54:59.965775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.014 [2024-07-25 11:54:59.965778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:26:03.949 11:55:00 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.949 [2024-07-25 11:55:00.679728] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:03.949 [2024-07-25 11:55:00.682289] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.949 11:55:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.949 malloc0 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.949 11:55:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.949 [2024-07-25 11:55:00.815953] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:26:03.949 [2024-07-25 11:55:00.816125] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:26:03.949 [2024-07-25 11:55:00.816141] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:03.949 [2024-07-25 11:55:00.816154] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:03.949 [2024-07-25 11:55:00.823952] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:03.949 [2024-07-25 11:55:00.824016] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:03.949 [2024-07-25 11:55:00.831734] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:03.949 [2024-07-25 11:55:00.831956] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:03.949 [2024-07-25 11:55:00.849733] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:26:03.949 1 00:26:03.949 11:55:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:03.949 11:55:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:26:04.885 11:55:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77017 00:26:04.885 11:55:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:26:04.885 11:55:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:26:05.144 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:05.144 fio-3.35 00:26:05.144 Starting 1 process 00:26:10.405 11:55:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76982 00:26:10.405 11:55:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:26:15.666 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76982 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:26:15.666 11:55:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77124 00:26:15.666 11:55:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:15.666 11:55:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:15.666 11:55:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77124 00:26:15.666 11:55:11 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77124 ']' 00:26:15.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.666 11:55:11 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.666 11:55:11 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:15.666 11:55:11 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.666 11:55:11 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:15.666 11:55:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.666 [2024-07-25 11:55:11.990903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
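The recovery scenario is induced deliberately: fio runs against the ublk block device while the old target (pid 76982) is killed with SIGKILL mid-I/O, and a fresh target is started in its place. Condensed from the ublk_recovery.sh lines traced above (commands and options are taken from the trace; pids differ per run, and the backgrounding is implied by the pid captures):

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"                          # simulate a hard target crash under load
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # bring up a replacement target
    spdk_pid=$!
    waitforlisten "$spdk_pid"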
00:26:15.666 [2024-07-25 11:55:11.991333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77124 ] 00:26:15.666 [2024-07-25 11:55:12.164117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:15.666 [2024-07-25 11:55:12.359307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.666 [2024-07-25 11:55:12.359310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:26:16.231 11:55:13 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.231 [2024-07-25 11:55:13.073808] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:16.231 [2024-07-25 11:55:13.076404] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.231 11:55:13 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.231 malloc0 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.231 11:55:13 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.231 [2024-07-25 11:55:13.215882] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:26:16.231 [2024-07-25 11:55:13.215943] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:16.231 [2024-07-25 11:55:13.215957] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:16.231 [2024-07-25 11:55:13.223769] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:16.231 [2024-07-25 11:55:13.223800] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:26:16.231 [2024-07-25 11:55:13.223900] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:26:16.231 1 00:26:16.231 11:55:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.231 11:55:13 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77017 00:26:42.789 [2024-07-25 11:55:36.938724] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:26:42.789 [2024-07-25 11:55:36.946376] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:26:42.789 [2024-07-25 11:55:36.953001] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:26:42.789 [2024-07-25 11:55:36.953040] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:27:09.359 00:27:09.359 
fio_test: (groupid=0, jobs=1): err= 0: pid=77020: Thu Jul 25 11:56:02 2024 00:27:09.360 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(2360MiB/60002msec) 00:27:09.360 slat (nsec): min=1992, max=305210, avg=6402.32, stdev=2732.42 00:27:09.360 clat (usec): min=1319, max=30095k, avg=6399.54, stdev=314476.88 00:27:09.360 lat (usec): min=1326, max=30095k, avg=6405.94, stdev=314476.87 00:27:09.360 clat percentiles (msec): 00:27:09.360 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:27:09.360 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:27:09.360 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:27:09.360 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:27:09.360 | 99.99th=[17113] 00:27:09.360 bw ( KiB/s): min=11760, max=84296, per=100.00%, avg=79309.03, stdev=11749.93, samples=60 00:27:09.360 iops : min= 2940, max=21074, avg=19827.23, stdev=2937.51, samples=60 00:27:09.360 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(2357MiB/60002msec); 0 zone resets 00:27:09.360 slat (nsec): min=1988, max=1563.3k, avg=6416.02, stdev=3357.57 00:27:09.360 clat (usec): min=1070, max=30095k, avg=6304.91, stdev=304992.24 00:27:09.360 lat (usec): min=1080, max=30095k, avg=6311.33, stdev=304992.24 00:27:09.360 clat percentiles (msec): 00:27:09.360 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:27:09.360 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:27:09.360 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:27:09.360 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:27:09.360 | 99.99th=[17113] 00:27:09.360 bw ( KiB/s): min=12320, max=84720, per=100.00%, avg=79223.17, stdev=11701.26, samples=60 00:27:09.360 iops : min= 3080, max=21180, avg=19805.77, stdev=2925.34, samples=60 00:27:09.360 lat (msec) : 2=0.07%, 4=94.13%, 10=5.75%, 20=0.04%, >=2000=0.01% 00:27:09.360 cpu : usr=5.54%, sys=12.11%, ctx=41018, majf=0, minf=13 00:27:09.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:27:09.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.360 issued rwts: total=604210,603446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.360 00:27:09.360 Run status group 0 (all jobs): 00:27:09.360 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=2360MiB (2475MB), run=60002-60002msec 00:27:09.360 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=2357MiB (2472MB), run=60002-60002msec 00:27:09.360 00:27:09.360 Disk stats (read/write): 00:27:09.360 ublkb1: ios=601810/601132, merge=0/0, ticks=3807528/3679171, in_queue=7486700, util=99.93% 00:27:09.360 11:56:02 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.360 [2024-07-25 11:56:02.127552] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:27:09.360 [2024-07-25 11:56:02.161858] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:09.360 [2024-07-25 11:56:02.162318] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:27:09.360 [2024-07-25 11:56:02.169893] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 
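After the restart the device is not re-created with ublk_start_disk; it is reattached with ublk_recover_disk, which drives the GET_DEV_INFO, START_USER_RECOVERY and END_USER_RECOVERY control commands seen above. A minimal sketch (both RPCs and their arguments are exactly as traced):

    # re-create the bdev that backed the device before the crash
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    # reattach existing ublk device 1 to it; I/O resumes once recovery completes
    ./scripts/rpc.py ublk_recover_disk malloc0 1

The fio job survives the whole episode: per the summary above, 60 s of runtime at roughly 39 MiB/s in each direction with err=0 and ublkb1 at 99.93% utilization.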
00:27:09.360 [2024-07-25 11:56:02.170089] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:27:09.360 [2024-07-25 11:56:02.170111] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.360 11:56:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.360 [2024-07-25 11:56:02.177849] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:27:09.360 [2024-07-25 11:56:02.185724] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:27:09.360 [2024-07-25 11:56:02.185787] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.360 11:56:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:27:09.360 11:56:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:27:09.360 11:56:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77124 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 77124 ']' 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 77124 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77124 00:27:09.360 killing process with pid 77124 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77124' 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@969 -- # kill 77124 00:27:09.360 11:56:02 ublk_recovery -- common/autotest_common.sh@974 -- # wait 77124 00:27:09.360 [2024-07-25 11:56:03.181494] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:27:09.360 [2024-07-25 11:56:03.181564] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:27:09.360 ************************************ 00:27:09.360 END TEST ublk_recovery 00:27:09.360 ************************************ 00:27:09.360 00:27:09.360 real 1m5.179s 00:27:09.360 user 1m52.076s 00:27:09.360 sys 0m17.932s 00:27:09.360 11:56:04 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.360 11:56:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.360 11:56:04 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@264 -- # timing_exit lib 00:27:09.360 11:56:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:09.360 11:56:04 -- common/autotest_common.sh@10 -- # set +x 00:27:09.360 11:56:04 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@334 -- 
# '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:27:09.360 11:56:04 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:09.360 11:56:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:09.360 11:56:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.360 11:56:04 -- common/autotest_common.sh@10 -- # set +x 00:27:09.360 ************************************ 00:27:09.360 START TEST ftl 00:27:09.360 ************************************ 00:27:09.360 11:56:04 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:09.360 * Looking for test storage... 00:27:09.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:09.360 11:56:04 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:09.360 11:56:04 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:09.360 11:56:04 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:09.360 11:56:04 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:09.360 11:56:04 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:09.360 11:56:04 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:09.360 11:56:04 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:09.360 11:56:04 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:09.360 11:56:04 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.360 11:56:04 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.360 11:56:04 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:09.360 11:56:04 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:09.360 11:56:04 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:09.360 11:56:04 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:09.360 11:56:04 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:09.360 11:56:04 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:09.360 11:56:04 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.360 11:56:04 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.360 11:56:04 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:09.360 11:56:04 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:09.360 11:56:04 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:09.360 11:56:04 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:09.360 11:56:04 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:09.360 11:56:04 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:09.360 11:56:04 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:09.360 11:56:04 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:09.360 11:56:04 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:09.360 11:56:04 ftl -- 
ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:27:09.360 11:56:04 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:09.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:09.360 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:09.360 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:09.360 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:09.361 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:09.361 11:56:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77906 00:27:09.361 11:56:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:27:09.361 11:56:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77906 00:27:09.361 11:56:05 ftl -- common/autotest_common.sh@831 -- # '[' -z 77906 ']' 00:27:09.361 11:56:05 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.361 11:56:05 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.361 11:56:05 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.361 11:56:05 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.361 11:56:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:09.361 [2024-07-25 11:56:05.264682] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
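The FTL suite launches the target with --wait-for-rpc so bdev-layer options can be set before any subsystem initializes; the launch is traced above and the follow-up RPCs just below. A sketch of the pattern (the /dev/fd/62 seen in the trace is the bash process substitution spelled out here; -d disables bdev auto-examine):

    "$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc &              # pause before subsystem init
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    ./scripts/rpc.py bdev_set_options -d                   # must happen pre-init
    ./scripts/rpc.py framework_start_init                  # now let subsystems come up
    ./scripts/rpc.py load_subsystem_config -j <(./scripts/gen_nvme.sh)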
00:27:09.361 [2024-07-25 11:56:05.265056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77906 ] 00:27:09.361 [2024-07-25 11:56:05.427232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.361 [2024-07-25 11:56:05.635625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.361 11:56:06 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.361 11:56:06 ftl -- common/autotest_common.sh@864 -- # return 0 00:27:09.361 11:56:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:27:09.618 11:56:06 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:27:10.553 11:56:07 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:27:10.553 11:56:07 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@50 -- # break 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:11.486 11:56:08 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:11.744 11:56:08 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:27:11.744 11:56:08 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:27:11.744 11:56:08 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:27:11.744 11:56:08 ftl -- ftl/ftl.sh@63 -- # break 00:27:11.744 11:56:08 ftl -- ftl/ftl.sh@66 -- # killprocess 77906 00:27:11.744 11:56:08 ftl -- common/autotest_common.sh@950 -- # '[' -z 77906 ']' 00:27:11.744 11:56:08 ftl -- common/autotest_common.sh@954 -- # kill -0 77906 00:27:11.744 11:56:08 ftl -- common/autotest_common.sh@955 -- # uname 00:27:11.744 11:56:08 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:11.744 11:56:08 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77906 00:27:12.002 killing process with pid 77906 00:27:12.002 11:56:08 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:12.002 11:56:08 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:12.002 11:56:08 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77906' 00:27:12.002 11:56:08 ftl -- common/autotest_common.sh@969 -- # kill 77906 00:27:12.002 11:56:08 ftl -- common/autotest_common.sh@974 -- # wait 77906 00:27:13.927 11:56:10 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:27:13.927 11:56:10 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:13.927 11:56:10 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:13.927 11:56:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:13.927 11:56:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:13.927 ************************************ 00:27:13.927 START TEST ftl_fio_basic 00:27:13.927 ************************************ 00:27:13.927 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:13.927 * Looking for test storage... 00:27:13.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.927 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:13.927 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:27:13.927 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78046 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78046 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 78046 ']' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.186 11:56:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:14.186 [2024-07-25 11:56:11.118482] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
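Cache and base disks for ftl_fio_basic were picked in ftl.sh lines 47 and 60, traced above, by listing every bdev and filtering with jq: the nv-cache candidate must be non-zoned, expose 64-byte metadata and have at least 1310720 blocks; the base disk is any other non-zoned bdev of the same minimum size. The two filters, reflowed for readability (verbatim from the trace; they resolve to 0000:00:10.0 and 0000:00:11.0 in this run):

    # nv-cache candidates
    ./scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false
                      and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

    # base candidates: any other NVMe bdev that is big enough
    ./scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
                      and .zoned == false
                      and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'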
00:27:14.186 [2024-07-25 11:56:11.118743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78046 ] 00:27:14.444 [2024-07-25 11:56:11.305890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:14.702 [2024-07-25 11:56:11.589629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.702 [2024-07-25 11:56:11.589727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.702 [2024-07-25 11:56:11.589732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:27:15.636 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:15.894 11:56:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:16.152 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:16.152 { 00:27:16.152 "name": "nvme0n1", 00:27:16.152 "aliases": [ 00:27:16.152 "38dcdfad-c1c1-479c-bb35-b9536a629cda" 00:27:16.152 ], 00:27:16.152 "product_name": "NVMe disk", 00:27:16.152 "block_size": 4096, 00:27:16.152 "num_blocks": 1310720, 00:27:16.152 "uuid": "38dcdfad-c1c1-479c-bb35-b9536a629cda", 00:27:16.152 "assigned_rate_limits": { 00:27:16.152 "rw_ios_per_sec": 0, 00:27:16.152 "rw_mbytes_per_sec": 0, 00:27:16.152 "r_mbytes_per_sec": 0, 00:27:16.152 "w_mbytes_per_sec": 0 00:27:16.152 }, 00:27:16.152 "claimed": false, 00:27:16.152 "zoned": false, 00:27:16.152 "supported_io_types": { 00:27:16.152 "read": true, 00:27:16.152 "write": true, 00:27:16.152 "unmap": true, 00:27:16.153 "flush": true, 00:27:16.153 "reset": true, 00:27:16.153 "nvme_admin": true, 00:27:16.153 "nvme_io": true, 00:27:16.153 "nvme_io_md": false, 00:27:16.153 "write_zeroes": true, 00:27:16.153 "zcopy": false, 00:27:16.153 "get_zone_info": false, 00:27:16.153 "zone_management": false, 00:27:16.153 "zone_append": false, 00:27:16.153 "compare": true, 00:27:16.153 "compare_and_write": false, 00:27:16.153 "abort": true, 00:27:16.153 "seek_hole": false, 00:27:16.153 
"seek_data": false, 00:27:16.153 "copy": true, 00:27:16.153 "nvme_iov_md": false 00:27:16.153 }, 00:27:16.153 "driver_specific": { 00:27:16.153 "nvme": [ 00:27:16.153 { 00:27:16.153 "pci_address": "0000:00:11.0", 00:27:16.153 "trid": { 00:27:16.153 "trtype": "PCIe", 00:27:16.153 "traddr": "0000:00:11.0" 00:27:16.153 }, 00:27:16.153 "ctrlr_data": { 00:27:16.153 "cntlid": 0, 00:27:16.153 "vendor_id": "0x1b36", 00:27:16.153 "model_number": "QEMU NVMe Ctrl", 00:27:16.153 "serial_number": "12341", 00:27:16.153 "firmware_revision": "8.0.0", 00:27:16.153 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:16.153 "oacs": { 00:27:16.153 "security": 0, 00:27:16.153 "format": 1, 00:27:16.153 "firmware": 0, 00:27:16.153 "ns_manage": 1 00:27:16.153 }, 00:27:16.153 "multi_ctrlr": false, 00:27:16.153 "ana_reporting": false 00:27:16.153 }, 00:27:16.153 "vs": { 00:27:16.153 "nvme_version": "1.4" 00:27:16.153 }, 00:27:16.153 "ns_data": { 00:27:16.153 "id": 1, 00:27:16.153 "can_share": false 00:27:16.153 } 00:27:16.153 } 00:27:16.153 ], 00:27:16.153 "mp_policy": "active_passive" 00:27:16.153 } 00:27:16.153 } 00:27:16.153 ]' 00:27:16.153 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:16.417 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:16.678 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:27:16.678 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:17.242 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=64052b74-1a6f-4eeb-974c-41d18c1672f5 00:27:17.242 11:56:13 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64052b74-1a6f-4eeb-974c-41d18c1672f5 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:17.500 11:56:14 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:17.500 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:18.065 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:18.065 { 00:27:18.065 "name": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:18.065 "aliases": [ 00:27:18.065 "lvs/nvme0n1p0" 00:27:18.065 ], 00:27:18.065 "product_name": "Logical Volume", 00:27:18.065 "block_size": 4096, 00:27:18.065 "num_blocks": 26476544, 00:27:18.065 "uuid": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:18.065 "assigned_rate_limits": { 00:27:18.065 "rw_ios_per_sec": 0, 00:27:18.065 "rw_mbytes_per_sec": 0, 00:27:18.065 "r_mbytes_per_sec": 0, 00:27:18.065 "w_mbytes_per_sec": 0 00:27:18.065 }, 00:27:18.065 "claimed": false, 00:27:18.065 "zoned": false, 00:27:18.065 "supported_io_types": { 00:27:18.065 "read": true, 00:27:18.065 "write": true, 00:27:18.065 "unmap": true, 00:27:18.065 "flush": false, 00:27:18.065 "reset": true, 00:27:18.065 "nvme_admin": false, 00:27:18.065 "nvme_io": false, 00:27:18.065 "nvme_io_md": false, 00:27:18.065 "write_zeroes": true, 00:27:18.065 "zcopy": false, 00:27:18.065 "get_zone_info": false, 00:27:18.065 "zone_management": false, 00:27:18.065 "zone_append": false, 00:27:18.065 "compare": false, 00:27:18.065 "compare_and_write": false, 00:27:18.065 "abort": false, 00:27:18.065 "seek_hole": true, 00:27:18.065 "seek_data": true, 00:27:18.065 "copy": false, 00:27:18.065 "nvme_iov_md": false 00:27:18.065 }, 00:27:18.065 "driver_specific": { 00:27:18.065 "lvol": { 00:27:18.065 "lvol_store_uuid": "64052b74-1a6f-4eeb-974c-41d18c1672f5", 00:27:18.065 "base_bdev": "nvme0n1", 00:27:18.065 "thin_provision": true, 00:27:18.065 "num_allocated_clusters": 0, 00:27:18.065 "snapshot": false, 00:27:18.065 "clone": false, 00:27:18.065 "esnap_clone": false 00:27:18.065 } 00:27:18.065 } 00:27:18.065 } 00:27:18.065 ]' 00:27:18.065 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:27:18.066 11:56:14 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:18.630 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:18.631 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:18.889 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:18.889 { 00:27:18.889 "name": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:18.889 "aliases": [ 00:27:18.889 "lvs/nvme0n1p0" 00:27:18.889 ], 00:27:18.889 "product_name": "Logical Volume", 00:27:18.889 "block_size": 4096, 00:27:18.889 "num_blocks": 26476544, 00:27:18.889 "uuid": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:18.889 "assigned_rate_limits": { 00:27:18.889 "rw_ios_per_sec": 0, 00:27:18.889 "rw_mbytes_per_sec": 0, 00:27:18.889 "r_mbytes_per_sec": 0, 00:27:18.889 "w_mbytes_per_sec": 0 00:27:18.889 }, 00:27:18.889 "claimed": false, 00:27:18.889 "zoned": false, 00:27:18.889 "supported_io_types": { 00:27:18.889 "read": true, 00:27:18.889 "write": true, 00:27:18.889 "unmap": true, 00:27:18.889 "flush": false, 00:27:18.889 "reset": true, 00:27:18.889 "nvme_admin": false, 00:27:18.889 "nvme_io": false, 00:27:18.889 "nvme_io_md": false, 00:27:18.889 "write_zeroes": true, 00:27:18.889 "zcopy": false, 00:27:18.889 "get_zone_info": false, 00:27:18.889 "zone_management": false, 00:27:18.889 "zone_append": false, 00:27:18.889 "compare": false, 00:27:18.889 "compare_and_write": false, 00:27:18.889 "abort": false, 00:27:18.889 "seek_hole": true, 00:27:18.889 "seek_data": true, 00:27:18.889 "copy": false, 00:27:18.889 "nvme_iov_md": false 00:27:18.889 }, 00:27:18.889 "driver_specific": { 00:27:18.889 "lvol": { 00:27:18.889 "lvol_store_uuid": "64052b74-1a6f-4eeb-974c-41d18c1672f5", 00:27:18.889 "base_bdev": "nvme0n1", 00:27:18.889 "thin_provision": true, 00:27:18.889 "num_allocated_clusters": 0, 00:27:18.889 "snapshot": false, 00:27:18.889 "clone": false, 00:27:18.889 "esnap_clone": false 00:27:18.889 } 00:27:18.889 } 00:27:18.889 } 00:27:18.889 ]' 00:27:18.889 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:18.889 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:18.889 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:19.145 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:19.145 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:19.145 11:56:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:27:19.145 11:56:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:27:19.145 11:56:15 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:27:19.403 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ace396d3-d81d-47ef-9c75-97f00c2cb7fe 
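The two get_bdev_size calls above reduce to block_size x num_blocks / 2^20: 4096 B x 1,310,720 blocks = 5120 MiB for the raw nvme0n1 namespace, and 4096 B x 26,476,544 blocks = 103,424 MiB for the lvol. Thin provisioning (the -t flag on bdev_lvol_create) is why the lvol can report roughly 101 GiB of capacity while sitting on a 5 GiB namespace.

The "[: -eq: unary operator expected" from fio.sh line 52 is the usual single-bracket pitfall: a variable that is unset, or empty and unquoted, expands to nothing, so `[` sees -eq as its first word with no left-hand operand. A minimal reproduction and a defensive rewrite, where MODE is a stand-in name and not the variable fio.sh actually tests:

    # reproduction: the empty, unquoted expansion leaves '[' with no LHS
    unset MODE
    [ $MODE -eq 1 ] && echo yes     # bash: [: -eq: unary operator expected

    # defensive forms: default the expansion, or use [[ ]], which does not
    # word-split its operands
    [ "${MODE:-0}" -eq 1 ] && echo yes
    [[ ${MODE:-0} -eq 1 ]] && echo yes

The run continues past the message because `[` merely returns a non-zero exit status, which the surrounding test treats as false.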
00:27:19.403 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:19.403 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ace396d3-d81d-47ef-9c75-97f00c2cb7fe 00:27:19.968 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:19.968 { 00:27:19.968 "name": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:19.968 "aliases": [ 00:27:19.968 "lvs/nvme0n1p0" 00:27:19.968 ], 00:27:19.968 "product_name": "Logical Volume", 00:27:19.968 "block_size": 4096, 00:27:19.968 "num_blocks": 26476544, 00:27:19.968 "uuid": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:19.968 "assigned_rate_limits": { 00:27:19.968 "rw_ios_per_sec": 0, 00:27:19.968 "rw_mbytes_per_sec": 0, 00:27:19.968 "r_mbytes_per_sec": 0, 00:27:19.968 "w_mbytes_per_sec": 0 00:27:19.968 }, 00:27:19.968 "claimed": false, 00:27:19.968 "zoned": false, 00:27:19.968 "supported_io_types": { 00:27:19.968 "read": true, 00:27:19.968 "write": true, 00:27:19.968 "unmap": true, 00:27:19.968 "flush": false, 00:27:19.968 "reset": true, 00:27:19.968 "nvme_admin": false, 00:27:19.968 "nvme_io": false, 00:27:19.968 "nvme_io_md": false, 00:27:19.968 "write_zeroes": true, 00:27:19.968 "zcopy": false, 00:27:19.968 "get_zone_info": false, 00:27:19.968 "zone_management": false, 00:27:19.968 "zone_append": false, 00:27:19.968 "compare": false, 00:27:19.968 "compare_and_write": false, 00:27:19.968 "abort": false, 00:27:19.968 "seek_hole": true, 00:27:19.968 "seek_data": true, 00:27:19.968 "copy": false, 00:27:19.968 "nvme_iov_md": false 00:27:19.968 }, 00:27:19.968 "driver_specific": { 00:27:19.968 "lvol": { 00:27:19.968 "lvol_store_uuid": "64052b74-1a6f-4eeb-974c-41d18c1672f5", 00:27:19.968 "base_bdev": "nvme0n1", 00:27:19.968 "thin_provision": true, 00:27:19.968 "num_allocated_clusters": 0, 00:27:19.968 "snapshot": false, 00:27:19.968 "clone": false, 00:27:19.969 "esnap_clone": false 00:27:19.969 } 00:27:19.969 } 00:27:19.969 } 00:27:19.969 ]' 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:27:19.969 11:56:16 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ace396d3-d81d-47ef-9c75-97f00c2cb7fe -c nvc0n1p0 --l2p_dram_limit 60 00:27:20.228 [2024-07-25 11:56:17.210852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.210916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:20.228 [2024-07-25 11:56:17.210939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:20.228 [2024-07-25 11:56:17.210955] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.211048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.211070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.228 [2024-07-25 11:56:17.211083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:20.228 [2024-07-25 11:56:17.211099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.211134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:20.228 [2024-07-25 11:56:17.212237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:20.228 [2024-07-25 11:56:17.212290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.212325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.228 [2024-07-25 11:56:17.212351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:27:20.228 [2024-07-25 11:56:17.212381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.212523] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7081b098-ba16-4457-8258-58c381aba5e1 00:27:20.228 [2024-07-25 11:56:17.213613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.213646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:20.228 [2024-07-25 11:56:17.213666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:20.228 [2024-07-25 11:56:17.213679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.218930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.219175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.228 [2024-07-25 11:56:17.219421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.157 ms 00:27:20.228 [2024-07-25 11:56:17.219605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.219979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.220144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.228 [2024-07-25 11:56:17.220347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:27:20.228 [2024-07-25 11:56:17.220530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.220874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.221040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:20.228 [2024-07-25 11:56:17.221228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:20.228 [2024-07-25 11:56:17.221410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.221641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:20.228 [2024-07-25 11:56:17.226563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.226810] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.228 [2024-07-25 11:56:17.226990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.946 ms 00:27:20.228 [2024-07-25 11:56:17.227194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.227436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.227616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:20.228 [2024-07-25 11:56:17.227813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:20.228 [2024-07-25 11:56:17.227980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.228234] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:20.228 [2024-07-25 11:56:17.228616] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:20.228 [2024-07-25 11:56:17.228943] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:20.228 [2024-07-25 11:56:17.229174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:20.228 [2024-07-25 11:56:17.229401] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:20.228 [2024-07-25 11:56:17.229600] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:20.228 [2024-07-25 11:56:17.229635] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:20.228 [2024-07-25 11:56:17.229664] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:20.228 [2024-07-25 11:56:17.229713] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:20.228 [2024-07-25 11:56:17.229744] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:20.228 [2024-07-25 11:56:17.229772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.229798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:20.228 [2024-07-25 11:56:17.229821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:27:20.228 [2024-07-25 11:56:17.229845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.229977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.228 [2024-07-25 11:56:17.230018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:20.228 [2024-07-25 11:56:17.230047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:20.228 [2024-07-25 11:56:17.230075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.228 [2024-07-25 11:56:17.230287] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:20.228 [2024-07-25 11:56:17.230338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:20.228 [2024-07-25 11:56:17.230366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.228 [2024-07-25 11:56:17.230409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.228 [2024-07-25 11:56:17.230434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:20.228 [2024-07-25 
11:56:17.230468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:20.228 [2024-07-25 11:56:17.230493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:20.228 [2024-07-25 11:56:17.230527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:20.228 [2024-07-25 11:56:17.230553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:20.228 [2024-07-25 11:56:17.230583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.228 [2024-07-25 11:56:17.230619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:20.228 [2024-07-25 11:56:17.230660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:20.228 [2024-07-25 11:56:17.230682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.228 [2024-07-25 11:56:17.230740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:20.228 [2024-07-25 11:56:17.230766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:20.228 [2024-07-25 11:56:17.230800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.228 [2024-07-25 11:56:17.230824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:20.228 [2024-07-25 11:56:17.230859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:20.228 [2024-07-25 11:56:17.230885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.228 [2024-07-25 11:56:17.230911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:20.228 [2024-07-25 11:56:17.230933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:20.228 [2024-07-25 11:56:17.230956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.228 [2024-07-25 11:56:17.230976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:20.228 [2024-07-25 11:56:17.231004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:20.228 [2024-07-25 11:56:17.231025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.228 [2024-07-25 11:56:17.231052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:20.228 [2024-07-25 11:56:17.231076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:20.228 [2024-07-25 11:56:17.231099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.228 [2024-07-25 11:56:17.231121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:20.228 [2024-07-25 11:56:17.231148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:20.228 [2024-07-25 11:56:17.231169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.228 [2024-07-25 11:56:17.231192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:20.228 [2024-07-25 11:56:17.231214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:20.228 [2024-07-25 11:56:17.231243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.228 [2024-07-25 11:56:17.231266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:20.228 [2024-07-25 11:56:17.231293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:20.228 [2024-07-25 11:56:17.231314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:27:20.228 [2024-07-25 11:56:17.231342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:20.228 [2024-07-25 11:56:17.231363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:20.228 [2024-07-25 11:56:17.231388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.228 [2024-07-25 11:56:17.231409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:20.229 [2024-07-25 11:56:17.231433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:20.229 [2024-07-25 11:56:17.231454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.229 [2024-07-25 11:56:17.231478] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:20.229 [2024-07-25 11:56:17.231502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:20.229 [2024-07-25 11:56:17.231568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.229 [2024-07-25 11:56:17.231595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.229 [2024-07-25 11:56:17.231629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:20.229 [2024-07-25 11:56:17.231660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:20.229 [2024-07-25 11:56:17.231721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:20.229 [2024-07-25 11:56:17.231750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:20.229 [2024-07-25 11:56:17.231787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:20.229 [2024-07-25 11:56:17.231812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:20.229 [2024-07-25 11:56:17.231852] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:20.229 [2024-07-25 11:56:17.231884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.231927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:20.229 [2024-07-25 11:56:17.231955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:20.229 [2024-07-25 11:56:17.231985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:20.229 [2024-07-25 11:56:17.232011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:20.229 [2024-07-25 11:56:17.232038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:20.229 [2024-07-25 11:56:17.232061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:20.229 [2024-07-25 11:56:17.232089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:20.229 [2024-07-25 11:56:17.232113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:20.229 [2024-07-25 
11:56:17.232138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:20.229 [2024-07-25 11:56:17.232160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.232189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.232213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.232240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.232265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:20.229 [2024-07-25 11:56:17.232293] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:20.229 [2024-07-25 11:56:17.232317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.232346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:20.229 [2024-07-25 11:56:17.232370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:20.229 [2024-07-25 11:56:17.232397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:20.229 [2024-07-25 11:56:17.232420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:20.229 [2024-07-25 11:56:17.232448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.229 [2024-07-25 11:56:17.232472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:20.229 [2024-07-25 11:56:17.232500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:27:20.229 [2024-07-25 11:56:17.232524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.229 [2024-07-25 11:56:17.232682] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
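The layout numbers above cross-check by hand: 20,971,520 L2P entries at an L2P address size of 4 bytes come to exactly 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout dump. A throwaway verification using only figures printed in this dump:

    # L2P map size implied by the layout: entries x 4 B, converted to MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that 80 MiB map may stay resident in DRAM, which is why startup later reports "l2p maximum resident size is: 59 (of 60) MiB".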
00:27:20.229 [2024-07-25 11:56:17.232742] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:24.411 [2024-07-25 11:56:20.670539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.670615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:24.411 [2024-07-25 11:56:20.670643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3437.880 ms 00:27:24.411 [2024-07-25 11:56:20.670657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.703814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.703882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:24.411 [2024-07-25 11:56:20.703907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.836 ms 00:27:24.411 [2024-07-25 11:56:20.703921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.704125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.704146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:24.411 [2024-07-25 11:56:20.704163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:24.411 [2024-07-25 11:56:20.704178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.759000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.759109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:24.411 [2024-07-25 11:56:20.759155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.718 ms 00:27:24.411 [2024-07-25 11:56:20.759181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.759302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.759335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:24.411 [2024-07-25 11:56:20.759368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:24.411 [2024-07-25 11:56:20.759392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.760059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.760115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:24.411 [2024-07-25 11:56:20.760149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:27:24.411 [2024-07-25 11:56:20.760173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.760515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.760562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:24.411 [2024-07-25 11:56:20.760594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:27:24.411 [2024-07-25 11:56:20.760619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.411 [2024-07-25 11:56:20.782234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.411 [2024-07-25 11:56:20.782302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:24.412 [2024-07-25 
11:56:20.782327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.550 ms 00:27:24.412 [2024-07-25 11:56:20.782340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:20.795866] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:24.412 [2024-07-25 11:56:20.809909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:20.809992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:24.412 [2024-07-25 11:56:20.810015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.403 ms 00:27:24.412 [2024-07-25 11:56:20.810029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:20.867806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:20.867891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:24.412 [2024-07-25 11:56:20.867913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.708 ms 00:27:24.412 [2024-07-25 11:56:20.867928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:20.868208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:20.868231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:24.412 [2024-07-25 11:56:20.868263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:27:24.412 [2024-07-25 11:56:20.868282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:20.901078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:20.901161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:24.412 [2024-07-25 11:56:20.901183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.691 ms 00:27:24.412 [2024-07-25 11:56:20.901198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:20.933242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:20.933342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:24.412 [2024-07-25 11:56:20.933366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.944 ms 00:27:24.412 [2024-07-25 11:56:20.933380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:20.934204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:20.934239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:24.412 [2024-07-25 11:56:20.934255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:27:24.412 [2024-07-25 11:56:20.934269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:21.022623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:21.022767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:24.412 [2024-07-25 11:56:21.022794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.237 ms 00:27:24.412 [2024-07-25 11:56:21.022814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 
11:56:21.056113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:21.056205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:24.412 [2024-07-25 11:56:21.056228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.207 ms 00:27:24.412 [2024-07-25 11:56:21.056243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:21.088655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:21.088757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:24.412 [2024-07-25 11:56:21.088782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.309 ms 00:27:24.412 [2024-07-25 11:56:21.088797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:21.122042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:21.122140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:24.412 [2024-07-25 11:56:21.122161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.161 ms 00:27:24.412 [2024-07-25 11:56:21.122176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:21.122288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:21.122310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:24.412 [2024-07-25 11:56:21.122324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:24.412 [2024-07-25 11:56:21.122341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:21.122513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.412 [2024-07-25 11:56:21.122538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:24.412 [2024-07-25 11:56:21.122552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:24.412 [2024-07-25 11:56:21.122566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.412 [2024-07-25 11:56:21.124078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3912.700 ms, result 0 00:27:24.412 { 00:27:24.412 "name": "ftl0", 00:27:24.412 "uuid": "7081b098-ba16-4457-8258-58c381aba5e1" 00:27:24.412 } 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:24.412 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:24.669 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:24.927 [ 00:27:24.927 { 00:27:24.927 "name": "ftl0", 00:27:24.927 "aliases": [ 00:27:24.927 "7081b098-ba16-4457-8258-58c381aba5e1" 00:27:24.927 ], 00:27:24.927 "product_name": "FTL 
disk", 00:27:24.927 "block_size": 4096, 00:27:24.927 "num_blocks": 20971520, 00:27:24.927 "uuid": "7081b098-ba16-4457-8258-58c381aba5e1", 00:27:24.927 "assigned_rate_limits": { 00:27:24.927 "rw_ios_per_sec": 0, 00:27:24.927 "rw_mbytes_per_sec": 0, 00:27:24.927 "r_mbytes_per_sec": 0, 00:27:24.927 "w_mbytes_per_sec": 0 00:27:24.927 }, 00:27:24.927 "claimed": false, 00:27:24.927 "zoned": false, 00:27:24.927 "supported_io_types": { 00:27:24.927 "read": true, 00:27:24.927 "write": true, 00:27:24.927 "unmap": true, 00:27:24.927 "flush": true, 00:27:24.927 "reset": false, 00:27:24.927 "nvme_admin": false, 00:27:24.927 "nvme_io": false, 00:27:24.927 "nvme_io_md": false, 00:27:24.927 "write_zeroes": true, 00:27:24.927 "zcopy": false, 00:27:24.927 "get_zone_info": false, 00:27:24.927 "zone_management": false, 00:27:24.927 "zone_append": false, 00:27:24.927 "compare": false, 00:27:24.927 "compare_and_write": false, 00:27:24.927 "abort": false, 00:27:24.927 "seek_hole": false, 00:27:24.927 "seek_data": false, 00:27:24.927 "copy": false, 00:27:24.927 "nvme_iov_md": false 00:27:24.927 }, 00:27:24.927 "driver_specific": { 00:27:24.927 "ftl": { 00:27:24.927 "base_bdev": "ace396d3-d81d-47ef-9c75-97f00c2cb7fe", 00:27:24.927 "cache": "nvc0n1p0" 00:27:24.927 } 00:27:24.927 } 00:27:24.927 } 00:27:24.927 ] 00:27:24.927 11:56:21 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:27:24.927 11:56:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:27:24.927 11:56:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:25.185 11:56:22 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:27:25.185 11:56:22 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:25.443 [2024-07-25 11:56:22.373684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.373758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.443 [2024-07-25 11:56:22.373787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:25.443 [2024-07-25 11:56:22.373800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.373851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.443 [2024-07-25 11:56:22.377193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.377235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.443 [2024-07-25 11:56:22.377251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.317 ms 00:27:25.443 [2024-07-25 11:56:22.377265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.377774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.377807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.443 [2024-07-25 11:56:22.377822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:27:25.443 [2024-07-25 11:56:22.377839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.381152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.381211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.443 
[2024-07-25 11:56:22.381228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:27:25.443 [2024-07-25 11:56:22.381242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.387926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.387965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.443 [2024-07-25 11:56:22.387981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms 00:27:25.443 [2024-07-25 11:56:22.388000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.419528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.419612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.443 [2024-07-25 11:56:22.419634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.412 ms 00:27:25.443 [2024-07-25 11:56:22.419649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.438302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.438391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.443 [2024-07-25 11:56:22.438413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.544 ms 00:27:25.443 [2024-07-25 11:56:22.438430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.438755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.438784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.443 [2024-07-25 11:56:22.438799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:27:25.443 [2024-07-25 11:56:22.438813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.443 [2024-07-25 11:56:22.470267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.443 [2024-07-25 11:56:22.470347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:25.443 [2024-07-25 11:56:22.470368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.419 ms 00:27:25.443 [2024-07-25 11:56:22.470382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.702 [2024-07-25 11:56:22.502292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.702 [2024-07-25 11:56:22.502389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:25.702 [2024-07-25 11:56:22.502411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.830 ms 00:27:25.702 [2024-07-25 11:56:22.502427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.702 [2024-07-25 11:56:22.533726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.702 [2024-07-25 11:56:22.533808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.702 [2024-07-25 11:56:22.533830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.215 ms 00:27:25.702 [2024-07-25 11:56:22.533844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.702 [2024-07-25 11:56:22.564959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.702 [2024-07-25 11:56:22.565043] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.702 [2024-07-25 11:56:22.565065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.921 ms 00:27:25.702 [2024-07-25 11:56:22.565079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.702 [2024-07-25 11:56:22.565154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.702 [2024-07-25 11:56:22.565184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:25.702 [2024-07-25 11:56:22.565200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.702 [2024-07-25 11:56:22.565215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.702 [2024-07-25 11:56:22.565227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.702 [2024-07-25 11:56:22.565241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 
[2024-07-25 11:56:22.565488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:27:25.703 [2024-07-25 11:56:22.565858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.565991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.703 [2024-07-25 11:56:22.566470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.704 [2024-07-25 11:56:22.566616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.704 [2024-07-25 11:56:22.566631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7081b098-ba16-4457-8258-58c381aba5e1 00:27:25.704 [2024-07-25 11:56:22.566645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:25.704 [2024-07-25 11:56:22.566659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:25.704 [2024-07-25 11:56:22.566675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.704 [2024-07-25 11:56:22.566687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.704 [2024-07-25 11:56:22.566714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.704 [2024-07-25 11:56:22.566727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.704 [2024-07-25 11:56:22.566740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.704 [2024-07-25 11:56:22.566751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.704 [2024-07-25 11:56:22.566763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.704 [2024-07-25 11:56:22.566775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.704 [2024-07-25 11:56:22.566789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.704 [2024-07-25 11:56:22.566802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:27:25.704 [2024-07-25 11:56:22.566815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.704 [2024-07-25 11:56:22.583599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.704 [2024-07-25 11:56:22.583669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.704 [2024-07-25 11:56:22.583710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.693 ms 00:27:25.704 [2024-07-25 11:56:22.583729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.704 [2024-07-25 11:56:22.584183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.704 [2024-07-25 11:56:22.584213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.704 [2024-07-25 11:56:22.584227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:27:25.704 [2024-07-25 11:56:22.584241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.704 [2024-07-25 11:56:22.642078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.704 [2024-07-25 11:56:22.642153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.704 [2024-07-25 11:56:22.642173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.704 [2024-07-25 11:56:22.642188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
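
An aside on the statistics block above: the FTL write-amplification figure appears to be the ratio of total media writes to user (host) writes, which is consistent with the "WAF: inf" printed here for a period with 960 total writes and 0 user writes. The same arithmetic as a minimal shell sketch (variable names hypothetical, values taken from the dump):

    # WAF = total media writes / user (host) writes; undefined when no user I/O occurred
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"
    else
      echo "WAF: $(echo "scale=2; $total_writes / $user_writes" | bc)"
    fi
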
00:27:25.704 [2024-07-25 11:56:22.642279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.704 [2024-07-25 11:56:22.642298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.704 [2024-07-25 11:56:22.642311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.704 [2024-07-25 11:56:22.642325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.704 [2024-07-25 11:56:22.642491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.704 [2024-07-25 11:56:22.642517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.704 [2024-07-25 11:56:22.642531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.704 [2024-07-25 11:56:22.642545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.704 [2024-07-25 11:56:22.642576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.704 [2024-07-25 11:56:22.642596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.704 [2024-07-25 11:56:22.642619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.704 [2024-07-25 11:56:22.642634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.962 [2024-07-25 11:56:22.747512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.962 [2024-07-25 11:56:22.747584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.962 [2024-07-25 11:56:22.747604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.962 [2024-07-25 11:56:22.747618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.962 [2024-07-25 11:56:22.831986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.962 [2024-07-25 11:56:22.832071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.962 [2024-07-25 11:56:22.832090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.962 [2024-07-25 11:56:22.832121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.962 [2024-07-25 11:56:22.832260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.962 [2024-07-25 11:56:22.832288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.962 [2024-07-25 11:56:22.832301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.962 [2024-07-25 11:56:22.832315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.962 [2024-07-25 11:56:22.832397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.962 [2024-07-25 11:56:22.832421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.963 [2024-07-25 11:56:22.832433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.963 [2024-07-25 11:56:22.832447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.963 [2024-07-25 11:56:22.832588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.963 [2024-07-25 11:56:22.832615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.963 [2024-07-25 11:56:22.832628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.963 [2024-07-25 
11:56:22.832642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.963 [2024-07-25 11:56:22.832735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.963 [2024-07-25 11:56:22.832759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.963 [2024-07-25 11:56:22.832772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.963 [2024-07-25 11:56:22.832786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.963 [2024-07-25 11:56:22.832843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.963 [2024-07-25 11:56:22.832868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.963 [2024-07-25 11:56:22.832883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.963 [2024-07-25 11:56:22.832897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.963 [2024-07-25 11:56:22.832960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.963 [2024-07-25 11:56:22.832982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.963 [2024-07-25 11:56:22.832995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.963 [2024-07-25 11:56:22.833009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.963 [2024-07-25 11:56:22.833203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 459.511 ms, result 0 00:27:25.963 true 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78046 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 78046 ']' 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 78046 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78046 00:27:25.963 killing process with pid 78046 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78046' 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 78046 00:27:25.963 11:56:22 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 78046 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- 
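
The killprocess helper traced above can be read off the xtrace: it checks the pid is still alive with kill -0, resolves the process name (reactor_0 in this run), refuses to kill a sudo wrapper, then kills and waits. A condensed sketch, not the verbatim helper from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                      # still running?
        local name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        [ "$name" = sudo ] && return 1                  # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
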
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:31.226 11:56:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:31.226 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:27:31.226 fio-3.35 00:27:31.226 Starting 1 thread 00:27:36.484 00:27:36.484 test: (groupid=0, jobs=1): err= 0: pid=78286: Thu Jul 25 11:56:32 2024 00:27:36.484 read: IOPS=1142, BW=75.9MiB/s (79.6MB/s)(255MiB/3354msec) 00:27:36.484 slat (nsec): min=5917, max=53914, avg=8468.83, stdev=4028.26 00:27:36.484 clat (usec): min=317, max=692, avg=395.47, stdev=39.63 00:27:36.484 lat (usec): min=328, max=698, avg=403.93, stdev=41.16 00:27:36.484 clat percentiles (usec): 00:27:36.484 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 371], 00:27:36.484 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 379], 60.00th=[ 388], 00:27:36.484 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 445], 95.00th=[ 498], 00:27:36.484 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 668], 00:27:36.484 | 99.99th=[ 693] 00:27:36.484 write: IOPS=1151, BW=76.4MiB/s (80.1MB/s)(256MiB/3350msec); 0 zone resets 00:27:36.484 slat (nsec): min=20588, max=98645, avg=25848.45, stdev=6800.53 00:27:36.484 clat (usec): min=347, max=4396, avg=427.59, stdev=80.54 00:27:36.484 lat (usec): min=379, max=4428, avg=453.44, stdev=80.97 00:27:36.484 clat percentiles (usec): 00:27:36.484 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 400], 00:27:36.484 | 30.00th=[ 400], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 416], 00:27:36.484 | 70.00th=[ 429], 80.00th=[ 453], 90.00th=[ 478], 95.00th=[ 506], 00:27:36.484 | 99.00th=[ 635], 99.50th=[ 685], 99.90th=[ 766], 99.95th=[ 1205], 00:27:36.484 | 99.99th=[ 4424] 00:27:36.484 bw ( KiB/s): min=74392, max=80784, per=99.68%, avg=78018.67, stdev=2915.18, samples=6 00:27:36.484 iops : min= 1094, max= 1188, avg=1147.33, stdev=42.87, samples=6 00:27:36.484 lat (usec) : 500=94.80%, 750=5.14%, 1000=0.03% 00:27:36.484 
lat (msec) : 2=0.03%, 10=0.01% 00:27:36.484 cpu : usr=98.81%, sys=0.21%, ctx=6, majf=0, minf=1171 00:27:36.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.484 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:36.484 00:27:36.484 Run status group 0 (all jobs): 00:27:36.484 READ: bw=75.9MiB/s (79.6MB/s), 75.9MiB/s-75.9MiB/s (79.6MB/s-79.6MB/s), io=255MiB (267MB), run=3354-3354msec 00:27:36.484 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=256MiB (269MB), run=3350-3350msec 00:27:37.415 ----------------------------------------------------- 00:27:37.415 Suppressions used: 00:27:37.415 count bytes template 00:27:37.415 1 5 /usr/src/fio/parse.c 00:27:37.415 1 8 libtcmalloc_minimal.so 00:27:37.415 1 904 libcrypto.so 00:27:37.415 ----------------------------------------------------- 00:27:37.415 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:37.415 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:37.416 11:56:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:37.673 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:37.673 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:37.673 fio-3.35 00:27:37.673 Starting 2 threads 00:28:09.755 00:28:09.755 first_half: (groupid=0, jobs=1): err= 0: pid=78386: Thu Jul 25 11:57:04 2024 00:28:09.755 read: IOPS=2278, BW=9115KiB/s (9334kB/s)(256MiB/28733msec) 00:28:09.755 slat (nsec): min=4823, max=52443, avg=8752.11, stdev=2612.34 00:28:09.755 clat (usec): min=785, max=380541, avg=47250.81, stdev=32046.75 00:28:09.755 lat (usec): min=790, max=380565, avg=47259.56, stdev=32047.09 00:28:09.755 clat percentiles (msec): 00:28:09.755 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:28:09.755 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:28:09.755 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 53], 95.00th=[ 97], 00:28:09.755 | 99.00th=[ 201], 99.50th=[ 243], 99.90th=[ 334], 99.95th=[ 351], 00:28:09.755 | 99.99th=[ 376] 00:28:09.755 write: IOPS=2284, BW=9139KiB/s (9358kB/s)(256MiB/28685msec); 0 zone resets 00:28:09.755 slat (usec): min=6, max=315, avg=10.10, stdev= 5.60 00:28:09.755 clat (usec): min=384, max=83559, avg=8866.26, stdev=9959.27 00:28:09.755 lat (usec): min=391, max=83571, avg=8876.36, stdev=9959.64 00:28:09.755 clat percentiles (usec): 00:28:09.755 | 1.00th=[ 1188], 5.00th=[ 1598], 10.00th=[ 1942], 20.00th=[ 3294], 00:28:09.755 | 30.00th=[ 4490], 40.00th=[ 5800], 50.00th=[ 6849], 60.00th=[ 7570], 00:28:09.755 | 70.00th=[ 8455], 80.00th=[10159], 90.00th=[15926], 95.00th=[24249], 00:28:09.755 | 99.00th=[61080], 99.50th=[69731], 99.90th=[81265], 99.95th=[81265], 00:28:09.755 | 99.99th=[82314] 00:28:09.755 bw ( KiB/s): min= 384, max=47032, per=100.00%, avg=20836.44, stdev=12867.55, samples=25 00:28:09.755 iops : min= 96, max=11758, avg=5209.16, stdev=3216.85, samples=25 00:28:09.755 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.15% 00:28:09.755 lat (msec) : 2=5.25%, 4=7.55%, 10=27.00%, 20=8.60%, 50=44.57% 00:28:09.755 lat (msec) : 100=4.39%, 250=2.23%, 500=0.20% 00:28:09.755 cpu : usr=98.95%, sys=0.17%, ctx=45, majf=0, minf=5545 00:28:09.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.755 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:09.755 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:09.755 second_half: (groupid=0, jobs=1): err= 0: pid=78387: Thu Jul 25 11:57:04 2024 00:28:09.755 read: IOPS=2297, BW=9188KiB/s (9409kB/s)(256MiB/28509msec) 00:28:09.756 slat (nsec): min=4904, max=56772, avg=8726.57, stdev=2652.68 00:28:09.756 clat (msec): min=14, max=405, avg=47.79, stdev=28.44 00:28:09.756 lat (msec): min=14, max=405, avg=47.80, stdev=28.44 00:28:09.756 clat percentiles (msec): 00:28:09.756 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:28:09.756 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 40], 00:28:09.756 | 70.00th=[ 44], 80.00th=[ 47], 90.00th=[ 58], 95.00th=[ 87], 00:28:09.756 | 99.00th=[ 197], 
99.50th=[ 220], 99.90th=[ 271], 99.95th=[ 372], 00:28:09.756 | 99.99th=[ 401] 00:28:09.756 write: IOPS=2310, BW=9243KiB/s (9465kB/s)(256MiB/28361msec); 0 zone resets 00:28:09.756 slat (usec): min=5, max=295, avg=10.96, stdev= 6.24 00:28:09.756 clat (usec): min=460, max=62557, avg=7895.98, stdev=5225.09 00:28:09.756 lat (usec): min=478, max=62566, avg=7906.93, stdev=5225.55 00:28:09.756 clat percentiles (usec): 00:28:09.756 | 1.00th=[ 1401], 5.00th=[ 2278], 10.00th=[ 3195], 20.00th=[ 4359], 00:28:09.756 | 30.00th=[ 5473], 40.00th=[ 6063], 50.00th=[ 6849], 60.00th=[ 7373], 00:28:09.756 | 70.00th=[ 8356], 80.00th=[ 9896], 90.00th=[15139], 95.00th=[16712], 00:28:09.756 | 99.00th=[28967], 99.50th=[38011], 99.90th=[46924], 99.95th=[48497], 00:28:09.756 | 99.99th=[59507] 00:28:09.756 bw ( KiB/s): min= 144, max=41256, per=100.00%, avg=21689.00, stdev=12863.55, samples=24 00:28:09.756 iops : min= 36, max=10314, avg=5422.25, stdev=3215.89, samples=24 00:28:09.756 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.13% 00:28:09.756 lat (msec) : 2=1.69%, 4=6.59%, 10=31.63%, 20=9.18%, 50=43.52% 00:28:09.756 lat (msec) : 100=5.12%, 250=2.00%, 500=0.08% 00:28:09.756 cpu : usr=98.98%, sys=0.19%, ctx=52, majf=0, minf=5576 00:28:09.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:09.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.756 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:09.756 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:09.756 00:28:09.756 Run status group 0 (all jobs): 00:28:09.756 READ: bw=17.8MiB/s (18.7MB/s), 9115KiB/s-9188KiB/s (9334kB/s-9409kB/s), io=512MiB (536MB), run=28509-28733msec 00:28:09.756 WRITE: bw=17.8MiB/s (18.7MB/s), 9139KiB/s-9243KiB/s (9358kB/s-9465kB/s), io=512MiB (537MB), run=28361-28685msec 00:28:10.014 ----------------------------------------------------- 00:28:10.014 Suppressions used: 00:28:10.014 count bytes template 00:28:10.014 2 10 /usr/src/fio/parse.c 00:28:10.014 3 288 /usr/src/fio/iolog.c 00:28:10.014 1 8 libtcmalloc_minimal.so 00:28:10.014 1 904 libcrypto.so 00:28:10.014 ----------------------------------------------------- 00:28:10.014 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:10.014 11:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:10.273 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:28:10.273 fio-3.35 00:28:10.273 Starting 1 thread 00:28:28.345 00:28:28.345 test: (groupid=0, jobs=1): err= 0: pid=78744: Thu Jul 25 11:57:24 2024 00:28:28.345 read: IOPS=6316, BW=24.7MiB/s (25.9MB/s)(255MiB/10323msec) 00:28:28.345 slat (nsec): min=4667, max=32520, avg=6805.99, stdev=1764.13 00:28:28.345 clat (usec): min=748, max=40201, avg=20254.93, stdev=1226.39 00:28:28.345 lat (usec): min=753, max=40205, avg=20261.74, stdev=1226.42 00:28:28.345 clat percentiles (usec): 00:28:28.345 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19530], 00:28:28.345 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:28:28.345 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21103], 95.00th=[22676], 00:28:28.345 | 99.00th=[25297], 99.50th=[25822], 99.90th=[30016], 99.95th=[34866], 00:28:28.345 | 99.99th=[39060] 00:28:28.345 write: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(256MiB/5640msec); 0 zone resets 00:28:28.345 slat (usec): min=6, max=301, avg= 9.55, stdev= 4.61 00:28:28.345 clat (usec): min=667, max=60135, avg=10948.03, stdev=13745.50 00:28:28.345 lat (usec): min=674, max=60142, avg=10957.58, stdev=13745.53 00:28:28.345 clat percentiles (usec): 00:28:28.345 | 1.00th=[ 914], 5.00th=[ 1139], 10.00th=[ 1270], 20.00th=[ 1467], 00:28:28.345 | 30.00th=[ 1680], 40.00th=[ 2311], 50.00th=[ 7111], 60.00th=[ 8225], 00:28:28.345 | 70.00th=[ 9634], 80.00th=[11731], 90.00th=[39060], 95.00th=[43254], 00:28:28.345 | 99.00th=[48497], 99.50th=[50594], 99.90th=[55837], 99.95th=[57410], 00:28:28.345 | 99.99th=[59507] 00:28:28.345 bw ( KiB/s): min=11448, max=65832, per=94.00%, avg=43690.67, stdev=13612.38, samples=12 00:28:28.345 iops : min= 2862, max=16458, avg=10922.67, stdev=3403.10, samples=12 00:28:28.345 lat (usec) : 750=0.02%, 1000=1.00% 00:28:28.345 lat (msec) : 2=17.92%, 4=1.95%, 10=15.62%, 20=30.02%, 50=33.18% 00:28:28.345 lat (msec) : 100=0.29% 00:28:28.345 cpu : usr=99.00%, sys=0.21%, ctx=29, majf=0, minf=5567 00:28:28.345 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:28.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.345 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:28.345 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:28.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:28.345 00:28:28.345 Run status group 0 (all jobs): 00:28:28.345 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=255MiB (267MB), run=10323-10323msec 00:28:28.345 WRITE: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=256MiB (268MB), run=5640-5640msec 00:28:29.282 ----------------------------------------------------- 00:28:29.282 Suppressions used: 00:28:29.282 count bytes template 00:28:29.282 1 5 /usr/src/fio/parse.c 00:28:29.282 2 192 /usr/src/fio/iolog.c 00:28:29.282 1 8 libtcmalloc_minimal.so 00:28:29.282 1 904 libcrypto.so 00:28:29.282 ----------------------------------------------------- 00:28:29.282 00:28:29.282 11:57:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:28:29.282 11:57:25 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.282 11:57:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:29.282 Remove shared memory files 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62222 /dev/shm/spdk_tgt_trace.pid76982 00:28:29.282 11:57:26 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:29.283 11:57:26 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:28:29.283 ************************************ 00:28:29.283 END TEST ftl_fio_basic 00:28:29.283 ************************************ 00:28:29.283 00:28:29.283 real 1m15.139s 00:28:29.283 user 2m49.148s 00:28:29.283 sys 0m3.870s 00:28:29.283 11:57:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:29.283 11:57:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:29.283 11:57:26 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:29.283 11:57:26 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:29.283 11:57:26 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:29.283 11:57:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:29.283 ************************************ 00:28:29.283 START TEST ftl_bdevperf 00:28:29.283 ************************************ 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:29.283 * Looking for test storage... 
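
A note on the three fio jobs above before the bdevperf run gets going: each one is launched through the fio_plugin helper, which ldd's the SPDK fio plugin, extracts the ASan runtime it links against, and preloads both so a stock /usr/src/fio/fio binary can drive ioengine=spdk_bdev. Condensed from the xtrace (paths as in this run):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # libasan must be first in LD_PRELOAD or the sanitized plugin fails to load
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job_file"   # randw-verify*.fio jobs above
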
00:28:29.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:28:29.283 11:57:26 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78990 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 78990 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 78990 ']' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:29.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:29.283 11:57:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.283 [2024-07-25 11:57:26.255380] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:29.283 [2024-07-25 11:57:26.255541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78990 ] 00:28:29.541 [2024-07-25 11:57:26.412225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.799 [2024-07-25 11:57:26.599922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:28:30.367 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:30.626 11:57:27 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:30.626 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:30.885 { 00:28:30.885 "name": "nvme0n1", 00:28:30.885 "aliases": [ 00:28:30.885 "8cf50445-bc1e-4096-ada4-0940cb0b0ee1" 00:28:30.885 ], 00:28:30.885 "product_name": "NVMe disk", 00:28:30.885 "block_size": 4096, 00:28:30.885 "num_blocks": 1310720, 00:28:30.885 "uuid": "8cf50445-bc1e-4096-ada4-0940cb0b0ee1", 00:28:30.885 "assigned_rate_limits": { 00:28:30.885 "rw_ios_per_sec": 0, 00:28:30.885 "rw_mbytes_per_sec": 0, 00:28:30.885 "r_mbytes_per_sec": 0, 00:28:30.885 "w_mbytes_per_sec": 0 00:28:30.885 }, 00:28:30.885 "claimed": true, 00:28:30.885 "claim_type": "read_many_write_one", 00:28:30.885 "zoned": false, 00:28:30.885 "supported_io_types": { 00:28:30.885 "read": true, 00:28:30.885 "write": true, 00:28:30.885 "unmap": true, 00:28:30.885 "flush": true, 00:28:30.885 "reset": true, 00:28:30.885 "nvme_admin": true, 00:28:30.885 "nvme_io": true, 00:28:30.885 "nvme_io_md": false, 00:28:30.885 "write_zeroes": true, 00:28:30.885 "zcopy": false, 00:28:30.885 "get_zone_info": false, 00:28:30.885 "zone_management": false, 00:28:30.885 "zone_append": false, 00:28:30.885 "compare": true, 00:28:30.885 "compare_and_write": false, 00:28:30.885 "abort": true, 00:28:30.885 "seek_hole": false, 00:28:30.885 "seek_data": false, 00:28:30.885 "copy": true, 00:28:30.885 "nvme_iov_md": false 00:28:30.885 }, 00:28:30.885 "driver_specific": { 00:28:30.885 "nvme": [ 00:28:30.885 { 00:28:30.885 "pci_address": "0000:00:11.0", 00:28:30.885 "trid": { 00:28:30.885 "trtype": "PCIe", 00:28:30.885 "traddr": "0000:00:11.0" 00:28:30.885 }, 00:28:30.885 "ctrlr_data": { 00:28:30.885 "cntlid": 0, 00:28:30.885 "vendor_id": "0x1b36", 00:28:30.885 "model_number": "QEMU NVMe Ctrl", 00:28:30.885 "serial_number": "12341", 00:28:30.885 "firmware_revision": "8.0.0", 00:28:30.885 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:30.885 "oacs": { 00:28:30.885 "security": 0, 00:28:30.885 "format": 1, 00:28:30.885 "firmware": 0, 00:28:30.885 "ns_manage": 1 00:28:30.885 }, 00:28:30.885 "multi_ctrlr": false, 00:28:30.885 "ana_reporting": false 00:28:30.885 }, 00:28:30.885 "vs": { 00:28:30.885 "nvme_version": "1.4" 00:28:30.885 }, 00:28:30.885 "ns_data": { 00:28:30.885 "id": 1, 00:28:30.885 "can_share": false 00:28:30.885 } 00:28:30.885 } 00:28:30.885 ], 00:28:30.885 "mp_policy": "active_passive" 00:28:30.885 } 00:28:30.885 } 00:28:30.885 ]' 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:28:30.885 11:57:27 ftl.ftl_bdevperf 
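
get_bdev_size, exercised above on nvme0n1, derives a size in MiB from two jq lookups against bdev_get_bdevs; a sketch consistent with the values in the trace (bs=4096, nb=1310720, bdev_size=5120):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                             # 5120 (MiB)
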
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:30.885 11:57:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:31.143 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=64052b74-1a6f-4eeb-974c-41d18c1672f5 00:28:31.143 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:28:31.143 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64052b74-1a6f-4eeb-974c-41d18c1672f5 00:28:31.709 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:31.709 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=0e274a85-9f51-4f6d-80f6-4cea9802f9e8 00:28:31.709 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0e274a85-9f51-4f6d-80f6-4cea9802f9e8 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=c6f40873-8718-46a6-904a-f5381acd5278 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c6f40873-8718-46a6-904a-f5381acd5278 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c6f40873-8718-46a6-904a-f5381acd5278 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c6f40873-8718-46a6-904a-f5381acd5278 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c6f40873-8718-46a6-904a-f5381acd5278 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:31.986 11:57:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6f40873-8718-46a6-904a-f5381acd5278 00:28:32.266 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:32.266 { 00:28:32.266 "name": "c6f40873-8718-46a6-904a-f5381acd5278", 00:28:32.266 "aliases": [ 00:28:32.266 "lvs/nvme0n1p0" 00:28:32.266 ], 00:28:32.266 "product_name": "Logical Volume", 00:28:32.266 "block_size": 4096, 00:28:32.266 "num_blocks": 26476544, 00:28:32.266 "uuid": "c6f40873-8718-46a6-904a-f5381acd5278", 00:28:32.266 "assigned_rate_limits": { 00:28:32.266 "rw_ios_per_sec": 0, 00:28:32.266 "rw_mbytes_per_sec": 0, 00:28:32.266 "r_mbytes_per_sec": 0, 00:28:32.266 "w_mbytes_per_sec": 0 00:28:32.266 }, 00:28:32.266 "claimed": false, 00:28:32.266 "zoned": false, 00:28:32.266 "supported_io_types": { 00:28:32.266 "read": true, 00:28:32.266 "write": true, 00:28:32.266 "unmap": true, 00:28:32.266 "flush": false, 00:28:32.266 "reset": true, 00:28:32.266 "nvme_admin": false, 00:28:32.266 "nvme_io": false, 00:28:32.266 "nvme_io_md": false, 00:28:32.266 "write_zeroes": true, 00:28:32.266 "zcopy": false, 00:28:32.266 "get_zone_info": false, 00:28:32.266 "zone_management": false, 00:28:32.266 "zone_append": false, 00:28:32.266 "compare": false, 00:28:32.266 "compare_and_write": false, 00:28:32.266 "abort": false, 00:28:32.266 "seek_hole": true, 
00:28:32.266 "seek_data": true, 00:28:32.266 "copy": false, 00:28:32.266 "nvme_iov_md": false 00:28:32.266 }, 00:28:32.266 "driver_specific": { 00:28:32.266 "lvol": { 00:28:32.266 "lvol_store_uuid": "0e274a85-9f51-4f6d-80f6-4cea9802f9e8", 00:28:32.266 "base_bdev": "nvme0n1", 00:28:32.266 "thin_provision": true, 00:28:32.266 "num_allocated_clusters": 0, 00:28:32.266 "snapshot": false, 00:28:32.266 "clone": false, 00:28:32.266 "esnap_clone": false 00:28:32.266 } 00:28:32.266 } 00:28:32.266 } 00:28:32.266 ]' 00:28:32.266 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:32.266 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:32.266 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:32.524 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:32.524 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:32.524 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:28:32.524 11:57:29 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:28:32.524 11:57:29 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:28:32.524 11:57:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c6f40873-8718-46a6-904a-f5381acd5278 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c6f40873-8718-46a6-904a-f5381acd5278 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:32.781 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6f40873-8718-46a6-904a-f5381acd5278 00:28:33.039 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:33.039 { 00:28:33.039 "name": "c6f40873-8718-46a6-904a-f5381acd5278", 00:28:33.039 "aliases": [ 00:28:33.039 "lvs/nvme0n1p0" 00:28:33.039 ], 00:28:33.039 "product_name": "Logical Volume", 00:28:33.039 "block_size": 4096, 00:28:33.039 "num_blocks": 26476544, 00:28:33.039 "uuid": "c6f40873-8718-46a6-904a-f5381acd5278", 00:28:33.039 "assigned_rate_limits": { 00:28:33.039 "rw_ios_per_sec": 0, 00:28:33.039 "rw_mbytes_per_sec": 0, 00:28:33.039 "r_mbytes_per_sec": 0, 00:28:33.039 "w_mbytes_per_sec": 0 00:28:33.039 }, 00:28:33.039 "claimed": false, 00:28:33.039 "zoned": false, 00:28:33.039 "supported_io_types": { 00:28:33.039 "read": true, 00:28:33.039 "write": true, 00:28:33.039 "unmap": true, 00:28:33.039 "flush": false, 00:28:33.039 "reset": true, 00:28:33.039 "nvme_admin": false, 00:28:33.039 "nvme_io": false, 00:28:33.039 "nvme_io_md": false, 00:28:33.039 "write_zeroes": true, 00:28:33.039 "zcopy": false, 00:28:33.039 "get_zone_info": false, 00:28:33.039 "zone_management": false, 00:28:33.039 "zone_append": false, 00:28:33.039 "compare": false, 00:28:33.039 "compare_and_write": false, 00:28:33.039 "abort": false, 00:28:33.039 "seek_hole": true, 00:28:33.039 "seek_data": true, 00:28:33.039 
"copy": false, 00:28:33.039 "nvme_iov_md": false 00:28:33.039 }, 00:28:33.039 "driver_specific": { 00:28:33.039 "lvol": { 00:28:33.039 "lvol_store_uuid": "0e274a85-9f51-4f6d-80f6-4cea9802f9e8", 00:28:33.039 "base_bdev": "nvme0n1", 00:28:33.039 "thin_provision": true, 00:28:33.039 "num_allocated_clusters": 0, 00:28:33.039 "snapshot": false, 00:28:33.039 "clone": false, 00:28:33.039 "esnap_clone": false 00:28:33.039 } 00:28:33.039 } 00:28:33.039 } 00:28:33.039 ]' 00:28:33.039 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:33.039 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:33.039 11:57:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:33.039 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:33.039 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:33.039 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:28:33.039 11:57:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:28:33.039 11:57:30 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size c6f40873-8718-46a6-904a-f5381acd5278 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c6f40873-8718-46a6-904a-f5381acd5278 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:33.298 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6f40873-8718-46a6-904a-f5381acd5278 00:28:33.556 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:33.556 { 00:28:33.556 "name": "c6f40873-8718-46a6-904a-f5381acd5278", 00:28:33.556 "aliases": [ 00:28:33.556 "lvs/nvme0n1p0" 00:28:33.556 ], 00:28:33.556 "product_name": "Logical Volume", 00:28:33.556 "block_size": 4096, 00:28:33.556 "num_blocks": 26476544, 00:28:33.556 "uuid": "c6f40873-8718-46a6-904a-f5381acd5278", 00:28:33.556 "assigned_rate_limits": { 00:28:33.556 "rw_ios_per_sec": 0, 00:28:33.556 "rw_mbytes_per_sec": 0, 00:28:33.556 "r_mbytes_per_sec": 0, 00:28:33.556 "w_mbytes_per_sec": 0 00:28:33.556 }, 00:28:33.556 "claimed": false, 00:28:33.556 "zoned": false, 00:28:33.556 "supported_io_types": { 00:28:33.556 "read": true, 00:28:33.556 "write": true, 00:28:33.556 "unmap": true, 00:28:33.556 "flush": false, 00:28:33.556 "reset": true, 00:28:33.556 "nvme_admin": false, 00:28:33.556 "nvme_io": false, 00:28:33.556 "nvme_io_md": false, 00:28:33.556 "write_zeroes": true, 00:28:33.556 "zcopy": false, 00:28:33.556 "get_zone_info": false, 00:28:33.556 "zone_management": false, 00:28:33.556 "zone_append": false, 00:28:33.556 "compare": false, 00:28:33.556 "compare_and_write": false, 00:28:33.556 "abort": false, 00:28:33.556 "seek_hole": true, 00:28:33.556 "seek_data": true, 00:28:33.556 "copy": false, 00:28:33.556 "nvme_iov_md": false 00:28:33.556 }, 00:28:33.556 "driver_specific": { 00:28:33.556 "lvol": { 00:28:33.556 "lvol_store_uuid": "0e274a85-9f51-4f6d-80f6-4cea9802f9e8", 00:28:33.556 "base_bdev": 
"nvme0n1", 00:28:33.556 "thin_provision": true, 00:28:33.556 "num_allocated_clusters": 0, 00:28:33.556 "snapshot": false, 00:28:33.556 "clone": false, 00:28:33.556 "esnap_clone": false 00:28:33.556 } 00:28:33.556 } 00:28:33.556 } 00:28:33.556 ]' 00:28:33.556 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:33.556 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:33.556 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:33.815 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:33.815 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:33.815 11:57:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:28:33.815 11:57:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:28:33.815 11:57:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c6f40873-8718-46a6-904a-f5381acd5278 -c nvc0n1p0 --l2p_dram_limit 20 00:28:33.815 [2024-07-25 11:57:30.804867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.815 [2024-07-25 11:57:30.804935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:33.815 [2024-07-25 11:57:30.804961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:33.815 [2024-07-25 11:57:30.804975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.815 [2024-07-25 11:57:30.805052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.815 [2024-07-25 11:57:30.805071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:33.815 [2024-07-25 11:57:30.805090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:33.815 [2024-07-25 11:57:30.805102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.815 [2024-07-25 11:57:30.805131] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:33.815 [2024-07-25 11:57:30.806181] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:33.815 [2024-07-25 11:57:30.806222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.815 [2024-07-25 11:57:30.806237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:33.816 [2024-07-25 11:57:30.806252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:28:33.816 [2024-07-25 11:57:30.806264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.806394] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 90b05d38-d27f-4dd5-b42f-7a14dc873793 00:28:33.816 [2024-07-25 11:57:30.807400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.807435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:33.816 [2024-07-25 11:57:30.807453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:33.816 [2024-07-25 11:57:30.807467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.812182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.812234] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:33.816 [2024-07-25 11:57:30.812267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:28:33.816 [2024-07-25 11:57:30.812280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.812400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.812424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:33.816 [2024-07-25 11:57:30.812437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:33.816 [2024-07-25 11:57:30.812453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.812534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.812555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:33.816 [2024-07-25 11:57:30.812568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:33.816 [2024-07-25 11:57:30.812580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.812609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:33.816 [2024-07-25 11:57:30.817268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.817306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:33.816 [2024-07-25 11:57:30.817343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.665 ms 00:28:33.816 [2024-07-25 11:57:30.817356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.817402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.817418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:33.816 [2024-07-25 11:57:30.817432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:33.816 [2024-07-25 11:57:30.817443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.817498] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:33.816 [2024-07-25 11:57:30.817657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:33.816 [2024-07-25 11:57:30.817696] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:33.816 [2024-07-25 11:57:30.817712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:33.816 [2024-07-25 11:57:30.817752] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:33.816 [2024-07-25 11:57:30.817768] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:33.816 [2024-07-25 11:57:30.817783] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:33.816 [2024-07-25 11:57:30.817800] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:33.816 [2024-07-25 11:57:30.817815] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:33.816 [2024-07-25 11:57:30.817826] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:28:33.816 [2024-07-25 11:57:30.817840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.817853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:33.816 [2024-07-25 11:57:30.817870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:28:33.816 [2024-07-25 11:57:30.817882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.817977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.816 [2024-07-25 11:57:30.817993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:33.816 [2024-07-25 11:57:30.818007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:33.816 [2024-07-25 11:57:30.818018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.816 [2024-07-25 11:57:30.818121] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:33.816 [2024-07-25 11:57:30.818138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:33.816 [2024-07-25 11:57:30.818157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:33.816 [2024-07-25 11:57:30.818196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:33.816 [2024-07-25 11:57:30.818232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:33.816 [2024-07-25 11:57:30.818255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:33.816 [2024-07-25 11:57:30.818265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:33.816 [2024-07-25 11:57:30.818277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:33.816 [2024-07-25 11:57:30.818288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:33.816 [2024-07-25 11:57:30.818303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:33.816 [2024-07-25 11:57:30.818320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:33.816 [2024-07-25 11:57:30.818347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818372] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:33.816 [2024-07-25 11:57:30.818396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:33.816 [2024-07-25 11:57:30.818429] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:33.816 [2024-07-25 11:57:30.818464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:33.816 [2024-07-25 11:57:30.818497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:33.816 [2024-07-25 11:57:30.818534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:33.816 [2024-07-25 11:57:30.818557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:33.816 [2024-07-25 11:57:30.818568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:33.816 [2024-07-25 11:57:30.818580] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:33.816 [2024-07-25 11:57:30.818590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:33.816 [2024-07-25 11:57:30.818617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:33.816 [2024-07-25 11:57:30.818629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:33.816 [2024-07-25 11:57:30.818652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:33.816 [2024-07-25 11:57:30.818664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818674] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:33.816 [2024-07-25 11:57:30.818687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:33.816 [2024-07-25 11:57:30.818714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:33.816 [2024-07-25 11:57:30.818728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.816 [2024-07-25 11:57:30.818755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:33.816 [2024-07-25 11:57:30.818770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:33.816 [2024-07-25 11:57:30.818781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:33.816 [2024-07-25 11:57:30.818794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:33.817 [2024-07-25 11:57:30.818805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:33.817 [2024-07-25 11:57:30.818817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:33.817 [2024-07-25 11:57:30.818833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:33.817 [2024-07-25 11:57:30.818849] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.818862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:33.817 [2024-07-25 11:57:30.818876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:33.817 [2024-07-25 11:57:30.818887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:33.817 [2024-07-25 11:57:30.818900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:33.817 [2024-07-25 11:57:30.818911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:33.817 [2024-07-25 11:57:30.818924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:33.817 [2024-07-25 11:57:30.818936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:33.817 [2024-07-25 11:57:30.818949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:33.817 [2024-07-25 11:57:30.818960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:33.817 [2024-07-25 11:57:30.818977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.818989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.819002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.819013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.819027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:33.817 [2024-07-25 11:57:30.819038] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:33.817 [2024-07-25 11:57:30.819052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.819064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:33.817 [2024-07-25 11:57:30.819077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:33.817 [2024-07-25 11:57:30.819089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:33.817 [2024-07-25 11:57:30.819102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:33.817 [2024-07-25 11:57:30.819115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.817 [2024-07-25 11:57:30.819132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:33.817 [2024-07-25 11:57:30.819144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:28:33.817 [2024-07-25 11:57:30.819157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.817 [2024-07-25 11:57:30.819206] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:33.817 [2024-07-25 11:57:30.819235] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:36.346 [2024-07-25 11:57:32.799416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.799635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:36.346 [2024-07-25 11:57:32.799792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1980.221 ms 00:28:36.346 [2024-07-25 11:57:32.799852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.838082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.838332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.346 [2024-07-25 11:57:32.838465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.874 ms 00:28:36.346 [2024-07-25 11:57:32.838594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.838855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.839003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:36.346 [2024-07-25 11:57:32.839128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:36.346 [2024-07-25 11:57:32.839187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.877501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.877733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.346 [2024-07-25 11:57:32.877884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.160 ms 00:28:36.346 [2024-07-25 11:57:32.877944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.878089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.878224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.346 [2024-07-25 11:57:32.878330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:36.346 [2024-07-25 11:57:32.878439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.878907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.879040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.346 [2024-07-25 11:57:32.879151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:28:36.346 [2024-07-25 11:57:32.879205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.879451] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.879576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.346 [2024-07-25 11:57:32.879684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:28:36.346 [2024-07-25 11:57:32.879759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.896141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.896332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.346 [2024-07-25 11:57:32.896446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.277 ms 00:28:36.346 [2024-07-25 11:57:32.896580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.909950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:28:36.346 [2024-07-25 11:57:32.914801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.914838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:36.346 [2024-07-25 11:57:32.914858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.995 ms 00:28:36.346 [2024-07-25 11:57:32.914871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.979011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.979104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:36.346 [2024-07-25 11:57:32.979128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.097 ms 00:28:36.346 [2024-07-25 11:57:32.979140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:32.979343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:32.979361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:36.346 [2024-07-25 11:57:32.979378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:28:36.346 [2024-07-25 11:57:32.979389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.010031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.010102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:36.346 [2024-07-25 11:57:33.010123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.573 ms 00:28:36.346 [2024-07-25 11:57:33.010135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.040640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.040679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:36.346 [2024-07-25 11:57:33.040750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.473 ms 00:28:36.346 [2024-07-25 11:57:33.040765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.041514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.041545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:36.346 [2024-07-25 11:57:33.041562] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:28:36.346 [2024-07-25 11:57:33.041573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.129266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.129330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:36.346 [2024-07-25 11:57:33.129357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.631 ms 00:28:36.346 [2024-07-25 11:57:33.129369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.162034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.162085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:36.346 [2024-07-25 11:57:33.162106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.611 ms 00:28:36.346 [2024-07-25 11:57:33.162121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.194348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.194405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:36.346 [2024-07-25 11:57:33.194427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.164 ms 00:28:36.346 [2024-07-25 11:57:33.194438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.226389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.226460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:36.346 [2024-07-25 11:57:33.226480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.897 ms 00:28:36.346 [2024-07-25 11:57:33.226492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.226547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.226566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:36.346 [2024-07-25 11:57:33.226584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:36.346 [2024-07-25 11:57:33.226596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.226770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.346 [2024-07-25 11:57:33.226804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:36.346 [2024-07-25 11:57:33.226821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:36.346 [2024-07-25 11:57:33.226835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.346 [2024-07-25 11:57:33.227920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2422.528 ms, result 0 00:28:36.346 { 00:28:36.346 "name": "ftl0", 00:28:36.346 "uuid": "90b05d38-d27f-4dd5-b42f-7a14dc873793" 00:28:36.346 } 00:28:36.346 11:57:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:28:36.346 11:57:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:28:36.346 11:57:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:28:36.604 11:57:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 [2024-07-25 11:57:33.636299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:28:36.862 I/O size of 69632 is greater than zero copy threshold (65536).
00:28:36.862 Zero copy mechanism will not be used.
00:28:36.862 Running I/O for 4 seconds...
00:28:41.063
00:28:41.063 Latency(us)
00:28:41.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.063 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:28:41.063 ftl0 : 4.00 1878.41 124.74 0.00 0.00 554.30 240.17 916.01
00:28:41.063 ===================================================================================================================
00:28:41.063 Total : 1878.41 124.74 0.00 0.00 554.30 240.17 916.01
00:28:41.063 0
00:28:41.063 [2024-07-25 11:57:37.646047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:28:41.063 11:57:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:28:45.271 [2024-07-25 11:57:37.780841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:28:45.271 Running I/O for 4 seconds...
00:28:45.271
00:28:45.271 Latency(us)
00:28:45.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:45.271 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:28:45.271 ftl0 : 4.02 7349.75 28.71 0.00 0.00 17365.10 322.09 38368.35
00:28:45.271 ===================================================================================================================
00:28:45.271 Total : 7349.75 28.71 0.00 0.00 17365.10 0.00 38368.35
00:28:45.271 [2024-07-25 11:57:41.812228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:28:45.271 0
00:28:45.271 11:57:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:28:45.271 [2024-07-25 11:57:41.926163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:28:45.271 Running I/O for 4 seconds...
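The three timed runs in this test are driven through bdevperf's RPC mode: the binary is started once with -z, which makes it idle until triggered, plus -T ftl0 to exercise only the FTL bdev, and each perform_tests call then picks the queue depth (-q), workload (-w), run time in seconds (-t) and I/O size in bytes (-o). A minimal bash sketch of that pattern, assuming bdevperf accepts the standard --json config-load option and that the waitforlisten/killprocess helpers from autotest_common.sh (both visible elsewhere in this log) are sourced:

  # sketch only; flags mirror the three runs captured in this log
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json "$FTL_JSON_CONF" -z -T ftl0 &
  bdevperf_pid=$!
  waitforlisten $bdevperf_pid
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
  killprocess $bdevperf_pid    # sends the shutdown signal logged near the end of this test

Keeping one long-lived bdevperf process lets the same FTL instance serve all three runs; only a per-run IO channel is created and torn down, which is what the io_channel_create_cb/io_channel_destroy_cb notices bracketing each run record.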
00:28:49.459
00:28:49.459 Latency(us)
00:28:49.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:49.459 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:49.459 Verification LBA range: start 0x0 length 0x1400000
00:28:49.459 ftl0 : 4.02 5678.98 22.18 0.00 0.00 22455.33 355.61 33125.47
00:28:49.459 ===================================================================================================================
00:28:49.459 Total : 5678.98 22.18 0.00 0.00 22455.33 0.00 33125.47
00:28:49.459 [2024-07-25 11:57:45.963562] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:28:49.459 0
00:28:49.459 11:57:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:28:49.459 [2024-07-25 11:57:46.247620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:49.459 [2024-07-25 11:57:46.247685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:49.459 [2024-07-25 11:57:46.247748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:49.459 [2024-07-25 11:57:46.247765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:49.459 [2024-07-25 11:57:46.247804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:49.459 [2024-07-25 11:57:46.251235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:49.459 [2024-07-25 11:57:46.251295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:49.459 [2024-07-25 11:57:46.251312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms
00:28:49.459 [2024-07-25 11:57:46.251326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:49.459 [2024-07-25 11:57:46.252874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:49.459 [2024-07-25 11:57:46.252923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:49.459 [2024-07-25 11:57:46.252941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms
00:28:49.459 [2024-07-25 11:57:46.252955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:49.459 [2024-07-25 11:57:46.430397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:49.459 [2024-07-25 11:57:46.430485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:49.459 [2024-07-25 11:57:46.430509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.416 ms
00:28:49.459 [2024-07-25 11:57:46.430527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:49.459 [2024-07-25 11:57:46.437897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:49.459 [2024-07-25 11:57:46.437941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:49.459 [2024-07-25 11:57:46.437958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.322 ms
00:28:49.459 [2024-07-25 11:57:46.437972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:49.459 [2024-07-25 11:57:46.469949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:49.459 [2024-07-25 11:57:46.470002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:49.459 [2024-07-25 11:57:46.470021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.882 ms 00:28:49.459 [2024-07-25 11:57:46.470035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.459 [2024-07-25 11:57:46.489166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.459 [2024-07-25 11:57:46.489231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:49.459 [2024-07-25 11:57:46.489268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.083 ms 00:28:49.459 [2024-07-25 11:57:46.489286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.459 [2024-07-25 11:57:46.489458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.459 [2024-07-25 11:57:46.489484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:49.459 [2024-07-25 11:57:46.489499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:28:49.459 [2024-07-25 11:57:46.489514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.719 [2024-07-25 11:57:46.520535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.719 [2024-07-25 11:57:46.520613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:49.719 [2024-07-25 11:57:46.520633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.000 ms 00:28:49.719 [2024-07-25 11:57:46.520693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.719 [2024-07-25 11:57:46.552505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.719 [2024-07-25 11:57:46.552566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:49.719 [2024-07-25 11:57:46.552583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.716 ms 00:28:49.719 [2024-07-25 11:57:46.552597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.719 [2024-07-25 11:57:46.583692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.719 [2024-07-25 11:57:46.583791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:49.719 [2024-07-25 11:57:46.583811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.051 ms 00:28:49.719 [2024-07-25 11:57:46.583825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.719 [2024-07-25 11:57:46.622934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.719 [2024-07-25 11:57:46.623078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:49.719 [2024-07-25 11:57:46.623111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.993 ms 00:28:49.719 [2024-07-25 11:57:46.623139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.719 [2024-07-25 11:57:46.623225] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:49.719 [2024-07-25 11:57:46.623270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:28:49.719 [2024-07-25 11:57:46.623367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:49.719 [2024-07-25 11:57:46.623993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.624989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625153] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:49.720 [2024-07-25 11:57:46.625649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:49.720 [2024-07-25 11:57:46.625668] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90b05d38-d27f-4dd5-b42f-7a14dc873793 00:28:49.720 [2024-07-25 11:57:46.625705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:49.720 [2024-07-25 11:57:46.625737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:28:49.720 [2024-07-25 11:57:46.625757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:49.720 [2024-07-25 11:57:46.625804] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:49.720 [2024-07-25 11:57:46.625833] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:49.720 [2024-07-25 11:57:46.625853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:49.720 [2024-07-25 11:57:46.625877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:49.720 [2024-07-25 11:57:46.625895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:49.720 [2024-07-25 11:57:46.625918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:49.720 [2024-07-25 11:57:46.625938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.720 [2024-07-25 11:57:46.625961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:49.720 [2024-07-25 11:57:46.625982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.716 ms 00:28:49.720 [2024-07-25 11:57:46.626003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.720 [2024-07-25 11:57:46.651912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.720 [2024-07-25 11:57:46.651989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:49.720 [2024-07-25 11:57:46.652032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:28:49.720 [2024-07-25 11:57:46.652059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.720 [2024-07-25 11:57:46.652728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.720 [2024-07-25 11:57:46.652782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:49.720 [2024-07-25 11:57:46.652808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:28:49.720 [2024-07-25 11:57:46.652829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.721 [2024-07-25 11:57:46.714273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.721 [2024-07-25 11:57:46.714359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.721 [2024-07-25 11:57:46.714391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.721 [2024-07-25 11:57:46.714419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.721 [2024-07-25 11:57:46.714531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.721 [2024-07-25 11:57:46.714562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.721 [2024-07-25 11:57:46.714588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.721 [2024-07-25 11:57:46.714623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.721 [2024-07-25 11:57:46.714807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.721 [2024-07-25 11:57:46.714848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.721 [2024-07-25 11:57:46.714869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.721 [2024-07-25 11:57:46.714892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.721 [2024-07-25 11:57:46.714930] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.721 [2024-07-25 11:57:46.714956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.721 [2024-07-25 11:57:46.714974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.721 [2024-07-25 11:57:46.714992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.979 [2024-07-25 11:57:46.818219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.979 [2024-07-25 11:57:46.818299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.979 [2024-07-25 11:57:46.818320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.979 [2024-07-25 11:57:46.818338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.979 [2024-07-25 11:57:46.903272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.903336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.980 [2024-07-25 11:57:46.903357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.903371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.903506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.903531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:49.980 [2024-07-25 11:57:46.903548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.903562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.903624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.903646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:49.980 [2024-07-25 11:57:46.903659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.903672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.903836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.903863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:49.980 [2024-07-25 11:57:46.903877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.903896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.903949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.903971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:49.980 [2024-07-25 11:57:46.903984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.903998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.904044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.904063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:49.980 [2024-07-25 11:57:46.904075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.904088] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.904145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:49.980 [2024-07-25 11:57:46.904165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:49.980 [2024-07-25 11:57:46.904178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:49.980 [2024-07-25 11:57:46.904191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.980 [2024-07-25 11:57:46.904340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 656.689 ms, result 0 00:28:49.980 true 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 78990 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 78990 ']' 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 78990 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78990 00:28:49.980 killing process with pid 78990 00:28:49.980 Received shutdown signal, test time was about 4.000000 seconds 00:28:49.980 00:28:49.980 Latency(us) 00:28:49.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.980 =================================================================================================================== 00:28:49.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78990' 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 78990 00:28:49.980 11:57:46 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 78990 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:54.169 Remove shared memory files 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:28:54.169 ************************************ 00:28:54.169 END TEST ftl_bdevperf 00:28:54.169 ************************************ 00:28:54.169 00:28:54.169 real 0m24.597s 00:28:54.169 user 0m28.188s 00:28:54.169 sys 0m1.034s 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.169 11:57:50 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.169 11:57:50 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:28:54.169 11:57:50 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:54.169 11:57:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.169 11:57:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:54.169 ************************************ 00:28:54.169 START TEST ftl_trim 00:28:54.169 ************************************ 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:28:54.169 * Looking for test storage... 00:28:54.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79349 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:54.169 11:57:50 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79349 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79349 ']' 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.169 11:57:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:54.169 [2024-07-25 11:57:50.926401] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:54.169 [2024-07-25 11:57:50.926779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79349 ] 00:28:54.169 [2024-07-25 11:57:51.102372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:54.427 [2024-07-25 11:57:51.340869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.427 [2024-07-25 11:57:51.340959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.427 [2024-07-25 11:57:51.340965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.360 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.360 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:28:55.360 11:57:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:55.360 11:57:52 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:28:55.360 11:57:52 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:55.360 11:57:52 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:28:55.360 11:57:52 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:28:55.361 11:57:52 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:55.619 11:57:52 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:55.619 11:57:52 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:28:55.619 11:57:52 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:55.619 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:55.619 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:55.619 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:28:55.619 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:28:55.619 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:55.877 { 00:28:55.877 "name": "nvme0n1", 00:28:55.877 "aliases": [ 00:28:55.877 "5c6b931f-e7aa-4d10-bc26-8e99e1afb4db" 00:28:55.877 ], 00:28:55.877 "product_name": "NVMe disk", 00:28:55.877 "block_size": 4096, 00:28:55.877 "num_blocks": 1310720, 00:28:55.877 "uuid": "5c6b931f-e7aa-4d10-bc26-8e99e1afb4db", 00:28:55.877 "assigned_rate_limits": { 00:28:55.877 "rw_ios_per_sec": 0, 00:28:55.877 "rw_mbytes_per_sec": 0, 00:28:55.877 "r_mbytes_per_sec": 0, 00:28:55.877 "w_mbytes_per_sec": 0 00:28:55.877 }, 00:28:55.877 "claimed": true, 00:28:55.877 "claim_type": "read_many_write_one", 00:28:55.877 "zoned": false, 00:28:55.877 "supported_io_types": { 00:28:55.877 "read": true, 00:28:55.877 "write": true, 00:28:55.877 "unmap": true, 00:28:55.877 "flush": true, 00:28:55.877 "reset": true, 00:28:55.877 "nvme_admin": true, 00:28:55.877 "nvme_io": true, 00:28:55.877 "nvme_io_md": false, 00:28:55.877 "write_zeroes": true, 00:28:55.877 "zcopy": false, 00:28:55.877 "get_zone_info": false, 00:28:55.877 "zone_management": false, 00:28:55.877 "zone_append": false, 00:28:55.877 "compare": true, 00:28:55.877 "compare_and_write": false, 00:28:55.877 "abort": true, 00:28:55.877 "seek_hole": false, 00:28:55.877 "seek_data": false, 00:28:55.877 
"copy": true, 00:28:55.877 "nvme_iov_md": false 00:28:55.877 }, 00:28:55.877 "driver_specific": { 00:28:55.877 "nvme": [ 00:28:55.877 { 00:28:55.877 "pci_address": "0000:00:11.0", 00:28:55.877 "trid": { 00:28:55.877 "trtype": "PCIe", 00:28:55.877 "traddr": "0000:00:11.0" 00:28:55.877 }, 00:28:55.877 "ctrlr_data": { 00:28:55.877 "cntlid": 0, 00:28:55.877 "vendor_id": "0x1b36", 00:28:55.877 "model_number": "QEMU NVMe Ctrl", 00:28:55.877 "serial_number": "12341", 00:28:55.877 "firmware_revision": "8.0.0", 00:28:55.877 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:55.877 "oacs": { 00:28:55.877 "security": 0, 00:28:55.877 "format": 1, 00:28:55.877 "firmware": 0, 00:28:55.877 "ns_manage": 1 00:28:55.877 }, 00:28:55.877 "multi_ctrlr": false, 00:28:55.877 "ana_reporting": false 00:28:55.877 }, 00:28:55.877 "vs": { 00:28:55.877 "nvme_version": "1.4" 00:28:55.877 }, 00:28:55.877 "ns_data": { 00:28:55.877 "id": 1, 00:28:55.877 "can_share": false 00:28:55.877 } 00:28:55.877 } 00:28:55.877 ], 00:28:55.877 "mp_policy": "active_passive" 00:28:55.877 } 00:28:55.877 } 00:28:55.877 ]' 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:55.877 11:57:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:28:55.877 11:57:52 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:28:55.877 11:57:52 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:55.877 11:57:52 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:28:55.877 11:57:52 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:55.877 11:57:52 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:56.136 11:57:53 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=0e274a85-9f51-4f6d-80f6-4cea9802f9e8 00:28:56.136 11:57:53 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:28:56.136 11:57:53 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e274a85-9f51-4f6d-80f6-4cea9802f9e8 00:28:56.393 11:57:53 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:56.651 11:57:53 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=acd62709-62db-4028-967a-beaa0c526cb7 00:28:56.651 11:57:53 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u acd62709-62db-4028-967a-beaa0c526cb7 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:28:56.910 11:57:53 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:56.910 11:57:53 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:56.910 11:57:53 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:56.910 11:57:53 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:28:56.910 11:57:53 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:28:56.910 11:57:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:57.168 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:57.168 { 00:28:57.168 "name": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:28:57.168 "aliases": [ 00:28:57.168 "lvs/nvme0n1p0" 00:28:57.168 ], 00:28:57.168 "product_name": "Logical Volume", 00:28:57.168 "block_size": 4096, 00:28:57.168 "num_blocks": 26476544, 00:28:57.168 "uuid": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:28:57.168 "assigned_rate_limits": { 00:28:57.168 "rw_ios_per_sec": 0, 00:28:57.168 "rw_mbytes_per_sec": 0, 00:28:57.168 "r_mbytes_per_sec": 0, 00:28:57.168 "w_mbytes_per_sec": 0 00:28:57.168 }, 00:28:57.168 "claimed": false, 00:28:57.168 "zoned": false, 00:28:57.168 "supported_io_types": { 00:28:57.168 "read": true, 00:28:57.168 "write": true, 00:28:57.168 "unmap": true, 00:28:57.168 "flush": false, 00:28:57.168 "reset": true, 00:28:57.168 "nvme_admin": false, 00:28:57.168 "nvme_io": false, 00:28:57.168 "nvme_io_md": false, 00:28:57.168 "write_zeroes": true, 00:28:57.168 "zcopy": false, 00:28:57.168 "get_zone_info": false, 00:28:57.168 "zone_management": false, 00:28:57.168 "zone_append": false, 00:28:57.168 "compare": false, 00:28:57.168 "compare_and_write": false, 00:28:57.168 "abort": false, 00:28:57.168 "seek_hole": true, 00:28:57.168 "seek_data": true, 00:28:57.168 "copy": false, 00:28:57.168 "nvme_iov_md": false 00:28:57.168 }, 00:28:57.168 "driver_specific": { 00:28:57.168 "lvol": { 00:28:57.168 "lvol_store_uuid": "acd62709-62db-4028-967a-beaa0c526cb7", 00:28:57.168 "base_bdev": "nvme0n1", 00:28:57.168 "thin_provision": true, 00:28:57.168 "num_allocated_clusters": 0, 00:28:57.168 "snapshot": false, 00:28:57.168 "clone": false, 00:28:57.168 "esnap_clone": false 00:28:57.168 } 00:28:57.168 } 00:28:57.168 } 00:28:57.168 ]' 00:28:57.168 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:57.168 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:28:57.168 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:57.425 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:57.425 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:57.425 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:28:57.425 11:57:54 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:28:57.425 11:57:54 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:28:57.425 11:57:54 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:57.683 11:57:54 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:57.683 11:57:54 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:57.683 11:57:54 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:57.683 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:57.683 
11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:57.683 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:28:57.683 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:28:57.683 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:57.940 { 00:28:57.940 "name": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:28:57.940 "aliases": [ 00:28:57.940 "lvs/nvme0n1p0" 00:28:57.940 ], 00:28:57.940 "product_name": "Logical Volume", 00:28:57.940 "block_size": 4096, 00:28:57.940 "num_blocks": 26476544, 00:28:57.940 "uuid": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:28:57.940 "assigned_rate_limits": { 00:28:57.940 "rw_ios_per_sec": 0, 00:28:57.940 "rw_mbytes_per_sec": 0, 00:28:57.940 "r_mbytes_per_sec": 0, 00:28:57.940 "w_mbytes_per_sec": 0 00:28:57.940 }, 00:28:57.940 "claimed": false, 00:28:57.940 "zoned": false, 00:28:57.940 "supported_io_types": { 00:28:57.940 "read": true, 00:28:57.940 "write": true, 00:28:57.940 "unmap": true, 00:28:57.940 "flush": false, 00:28:57.940 "reset": true, 00:28:57.940 "nvme_admin": false, 00:28:57.940 "nvme_io": false, 00:28:57.940 "nvme_io_md": false, 00:28:57.940 "write_zeroes": true, 00:28:57.940 "zcopy": false, 00:28:57.940 "get_zone_info": false, 00:28:57.940 "zone_management": false, 00:28:57.940 "zone_append": false, 00:28:57.940 "compare": false, 00:28:57.940 "compare_and_write": false, 00:28:57.940 "abort": false, 00:28:57.940 "seek_hole": true, 00:28:57.940 "seek_data": true, 00:28:57.940 "copy": false, 00:28:57.940 "nvme_iov_md": false 00:28:57.940 }, 00:28:57.940 "driver_specific": { 00:28:57.940 "lvol": { 00:28:57.940 "lvol_store_uuid": "acd62709-62db-4028-967a-beaa0c526cb7", 00:28:57.940 "base_bdev": "nvme0n1", 00:28:57.940 "thin_provision": true, 00:28:57.940 "num_allocated_clusters": 0, 00:28:57.940 "snapshot": false, 00:28:57.940 "clone": false, 00:28:57.940 "esnap_clone": false 00:28:57.940 } 00:28:57.940 } 00:28:57.940 } 00:28:57.940 ]' 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:57.940 11:57:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:28:57.940 11:57:54 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:28:57.940 11:57:54 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:58.197 11:57:55 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:28:58.197 11:57:55 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:28:58.197 11:57:55 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:58.197 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:58.197 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:58.197 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:28:58.197 11:57:55 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:28:58.197 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb 00:28:58.454 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:58.454 { 00:28:58.454 "name": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:28:58.455 "aliases": [ 00:28:58.455 "lvs/nvme0n1p0" 00:28:58.455 ], 00:28:58.455 "product_name": "Logical Volume", 00:28:58.455 "block_size": 4096, 00:28:58.455 "num_blocks": 26476544, 00:28:58.455 "uuid": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:28:58.455 "assigned_rate_limits": { 00:28:58.455 "rw_ios_per_sec": 0, 00:28:58.455 "rw_mbytes_per_sec": 0, 00:28:58.455 "r_mbytes_per_sec": 0, 00:28:58.455 "w_mbytes_per_sec": 0 00:28:58.455 }, 00:28:58.455 "claimed": false, 00:28:58.455 "zoned": false, 00:28:58.455 "supported_io_types": { 00:28:58.455 "read": true, 00:28:58.455 "write": true, 00:28:58.455 "unmap": true, 00:28:58.455 "flush": false, 00:28:58.455 "reset": true, 00:28:58.455 "nvme_admin": false, 00:28:58.455 "nvme_io": false, 00:28:58.455 "nvme_io_md": false, 00:28:58.455 "write_zeroes": true, 00:28:58.455 "zcopy": false, 00:28:58.455 "get_zone_info": false, 00:28:58.455 "zone_management": false, 00:28:58.455 "zone_append": false, 00:28:58.455 "compare": false, 00:28:58.455 "compare_and_write": false, 00:28:58.455 "abort": false, 00:28:58.455 "seek_hole": true, 00:28:58.455 "seek_data": true, 00:28:58.455 "copy": false, 00:28:58.455 "nvme_iov_md": false 00:28:58.455 }, 00:28:58.455 "driver_specific": { 00:28:58.455 "lvol": { 00:28:58.455 "lvol_store_uuid": "acd62709-62db-4028-967a-beaa0c526cb7", 00:28:58.455 "base_bdev": "nvme0n1", 00:28:58.455 "thin_provision": true, 00:28:58.455 "num_allocated_clusters": 0, 00:28:58.455 "snapshot": false, 00:28:58.455 "clone": false, 00:28:58.455 "esnap_clone": false 00:28:58.455 } 00:28:58.455 } 00:28:58.455 } 00:28:58.455 ]' 00:28:58.455 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:58.713 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:28:58.713 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:58.713 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:58.713 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:58.713 11:57:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:28:58.713 11:57:55 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:28:58.713 11:57:55 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:28:58.972 [2024-07-25 11:57:55.811879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.972 [2024-07-25 11:57:55.811946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:58.972 [2024-07-25 11:57:55.811966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:58.972 [2024-07-25 11:57:55.811981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.972 [2024-07-25 11:57:55.815295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.972 [2024-07-25 11:57:55.815345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.972 [2024-07-25 11:57:55.815363] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:28:58.972 [2024-07-25 11:57:55.815377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.972 [2024-07-25 11:57:55.815592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:58.972 [2024-07-25 11:57:55.816533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:58.972 [2024-07-25 11:57:55.816575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.972 [2024-07-25 11:57:55.816595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.972 [2024-07-25 11:57:55.816609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:28:58.972 [2024-07-25 11:57:55.816622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.972 [2024-07-25 11:57:55.816808] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d517d354-a105-4bd1-9c44-04083dc2667e 00:28:58.972 [2024-07-25 11:57:55.817828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.972 [2024-07-25 11:57:55.817868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:58.972 [2024-07-25 11:57:55.817888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:58.972 [2024-07-25 11:57:55.817900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.972 [2024-07-25 11:57:55.822661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.972 [2024-07-25 11:57:55.822757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.972 [2024-07-25 11:57:55.822794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:28:58.972 [2024-07-25 11:57:55.822813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.972 [2024-07-25 11:57:55.823084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.973 [2024-07-25 11:57:55.823119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.973 [2024-07-25 11:57:55.823148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:28:58.973 [2024-07-25 11:57:55.823168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.973 [2024-07-25 11:57:55.823252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.973 [2024-07-25 11:57:55.823284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:58.973 [2024-07-25 11:57:55.823312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:58.973 [2024-07-25 11:57:55.823332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.973 [2024-07-25 11:57:55.823400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:58.973 [2024-07-25 11:57:55.828953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.973 [2024-07-25 11:57:55.829021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.973 [2024-07-25 11:57:55.829041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.573 ms 00:28:58.973 [2024-07-25 11:57:55.829055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.973 [2024-07-25 
11:57:55.829153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.973 [2024-07-25 11:57:55.829186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:58.973 [2024-07-25 11:57:55.829206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:58.973 [2024-07-25 11:57:55.829221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.973 [2024-07-25 11:57:55.829273] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:58.973 [2024-07-25 11:57:55.829439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:58.973 [2024-07-25 11:57:55.829458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:58.973 [2024-07-25 11:57:55.829478] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:58.973 [2024-07-25 11:57:55.829493] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:58.973 [2024-07-25 11:57:55.829509] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:58.973 [2024-07-25 11:57:55.829526] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:58.973 [2024-07-25 11:57:55.829539] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:58.973 [2024-07-25 11:57:55.829550] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:58.973 [2024-07-25 11:57:55.829587] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:58.973 [2024-07-25 11:57:55.829599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.973 [2024-07-25 11:57:55.829612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:58.973 [2024-07-25 11:57:55.829625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:28:58.973 [2024-07-25 11:57:55.829638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.973 [2024-07-25 11:57:55.829774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.973 [2024-07-25 11:57:55.829807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:58.973 [2024-07-25 11:57:55.829820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:28:58.973 [2024-07-25 11:57:55.829837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.973 [2024-07-25 11:57:55.829968] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:58.973 [2024-07-25 11:57:55.829991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:58.973 [2024-07-25 11:57:55.830004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:58.973 [2024-07-25 11:57:55.830043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:28:58.973 [2024-07-25 11:57:55.830077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.973 [2024-07-25 11:57:55.830100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:58.973 [2024-07-25 11:57:55.830117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:58.973 [2024-07-25 11:57:55.830128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.973 [2024-07-25 11:57:55.830140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:58.973 [2024-07-25 11:57:55.830151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:58.973 [2024-07-25 11:57:55.830168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:58.973 [2024-07-25 11:57:55.830210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:58.973 [2024-07-25 11:57:55.830245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830257] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:58.973 [2024-07-25 11:57:55.830280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:58.973 [2024-07-25 11:57:55.830313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:58.973 [2024-07-25 11:57:55.830348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.973 [2024-07-25 11:57:55.830371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:58.973 [2024-07-25 11:57:55.830381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.973 [2024-07-25 11:57:55.830407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:58.973 [2024-07-25 11:57:55.830420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:58.973 [2024-07-25 11:57:55.830434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.973 [2024-07-25 11:57:55.830448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:58.973 [2024-07-25 11:57:55.830459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:58.973 [2024-07-25 11:57:55.830471] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:58.973 [2024-07-25 11:57:55.830494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:58.973 [2024-07-25 11:57:55.830504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.973 [2024-07-25 11:57:55.830516] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:58.973 [2024-07-25 11:57:55.830528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:58.973 [2024-07-25 11:57:55.830541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.974 [2024-07-25 11:57:55.830552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.974 [2024-07-25 11:57:55.830569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:58.974 [2024-07-25 11:57:55.830580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:58.974 [2024-07-25 11:57:55.830594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:58.974 [2024-07-25 11:57:55.830619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:58.974 [2024-07-25 11:57:55.830633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:58.974 [2024-07-25 11:57:55.830644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:58.974 [2024-07-25 11:57:55.830662] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:58.974 [2024-07-25 11:57:55.830677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.830990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:58.974 [2024-07-25 11:57:55.831155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:58.974 [2024-07-25 11:57:55.831336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:58.974 [2024-07-25 11:57:55.831531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:58.974 [2024-07-25 11:57:55.831742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:58.974 [2024-07-25 11:57:55.831934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:58.974 [2024-07-25 11:57:55.832131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:58.974 [2024-07-25 11:57:55.832294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:58.974 [2024-07-25 11:57:55.832463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:58.974 [2024-07-25 11:57:55.832625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.832819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.832973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.832997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.833011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:58.974 [2024-07-25 11:57:55.833024] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:58.974 [2024-07-25 11:57:55.833038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.833059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:58.974 [2024-07-25 11:57:55.833071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:58.974 [2024-07-25 11:57:55.833085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:58.974 [2024-07-25 11:57:55.833096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:58.974 [2024-07-25 11:57:55.833112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.974 [2024-07-25 11:57:55.833125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:58.974 [2024-07-25 11:57:55.833140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:28:58.974 [2024-07-25 11:57:55.833152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.974 [2024-07-25 11:57:55.833299] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:28:58.974 [2024-07-25 11:57:55.833322] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:00.874 [2024-07-25 11:57:57.840013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.874 [2024-07-25 11:57:57.840081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:00.874 [2024-07-25 11:57:57.840121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2006.723 ms 00:29:00.874 [2024-07-25 11:57:57.840134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.874 [2024-07-25 11:57:57.872752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.874 [2024-07-25 11:57:57.872815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:00.874 [2024-07-25 11:57:57.872840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.314 ms 00:29:00.874 [2024-07-25 11:57:57.872854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.874 [2024-07-25 11:57:57.873047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.874 [2024-07-25 11:57:57.873067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:00.874 [2024-07-25 11:57:57.873086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:00.874 [2024-07-25 11:57:57.873098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.132 [2024-07-25 11:57:57.927732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.132 [2024-07-25 11:57:57.927801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:01.133 [2024-07-25 11:57:57.927829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.585 ms 00:29:01.133 [2024-07-25 11:57:57.927844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:57.928029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:57.928054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:01.133 [2024-07-25 11:57:57.928077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:01.133 [2024-07-25 11:57:57.928092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:57.928482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:57.928505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:01.133 [2024-07-25 11:57:57.928523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:29:01.133 [2024-07-25 11:57:57.928538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:57.928762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:57.928784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:01.133 [2024-07-25 11:57:57.928803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:29:01.133 [2024-07-25 11:57:57.928817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:57.947014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:57.947059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:01.133 [2024-07-25 
11:57:57.947109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.144 ms 00:29:01.133 [2024-07-25 11:57:57.947121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:57.960259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:01.133 [2024-07-25 11:57:57.974351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:57.974435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:01.133 [2024-07-25 11:57:57.974456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.090 ms 00:29:01.133 [2024-07-25 11:57:57.974470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:58.041686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:58.041781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:01.133 [2024-07-25 11:57:58.041822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.073 ms 00:29:01.133 [2024-07-25 11:57:58.041837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:58.042116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:58.042141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:01.133 [2024-07-25 11:57:58.042155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:29:01.133 [2024-07-25 11:57:58.042185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:58.073235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:58.073282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:01.133 [2024-07-25 11:57:58.073300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.986 ms 00:29:01.133 [2024-07-25 11:57:58.073313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:58.104390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:58.104440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:01.133 [2024-07-25 11:57:58.104474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.979 ms 00:29:01.133 [2024-07-25 11:57:58.104487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.133 [2024-07-25 11:57:58.105292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.133 [2024-07-25 11:57:58.105328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:01.133 [2024-07-25 11:57:58.105343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:29:01.133 [2024-07-25 11:57:58.105357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 11:57:58.192527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.392 [2024-07-25 11:57:58.192590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:01.392 [2024-07-25 11:57:58.192627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.126 ms 00:29:01.392 [2024-07-25 11:57:58.192645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 
11:57:58.225299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.392 [2024-07-25 11:57:58.225367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:01.392 [2024-07-25 11:57:58.225391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.529 ms 00:29:01.392 [2024-07-25 11:57:58.225405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 11:57:58.257144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.392 [2024-07-25 11:57:58.257191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:01.392 [2024-07-25 11:57:58.257225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.628 ms 00:29:01.392 [2024-07-25 11:57:58.257239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 11:57:58.289315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.392 [2024-07-25 11:57:58.289363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:01.392 [2024-07-25 11:57:58.289398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.982 ms 00:29:01.392 [2024-07-25 11:57:58.289412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 11:57:58.289517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.392 [2024-07-25 11:57:58.289543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:01.392 [2024-07-25 11:57:58.289559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:01.392 [2024-07-25 11:57:58.289575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 11:57:58.289678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.392 [2024-07-25 11:57:58.289698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:01.392 [2024-07-25 11:57:58.289750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:01.392 [2024-07-25 11:57:58.289796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.392 [2024-07-25 11:57:58.290805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:01.392 [2024-07-25 11:57:58.294969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2478.594 ms, result 0 00:29:01.392 [2024-07-25 11:57:58.295934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:01.392 { 00:29:01.392 "name": "ftl0", 00:29:01.392 "uuid": "d517d354-a105-4bd1-9c44-04083dc2667e" 00:29:01.392 } 00:29:01.392 11:57:58 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:29:01.392 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:29:01.392 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:01.392 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:29:01.392 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:01.392 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:01.392 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:01.650 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:29:01.908 [ 00:29:01.908 { 00:29:01.908 "name": "ftl0", 00:29:01.908 "aliases": [ 00:29:01.908 "d517d354-a105-4bd1-9c44-04083dc2667e" 00:29:01.909 ], 00:29:01.909 "product_name": "FTL disk", 00:29:01.909 "block_size": 4096, 00:29:01.909 "num_blocks": 23592960, 00:29:01.909 "uuid": "d517d354-a105-4bd1-9c44-04083dc2667e", 00:29:01.909 "assigned_rate_limits": { 00:29:01.909 "rw_ios_per_sec": 0, 00:29:01.909 "rw_mbytes_per_sec": 0, 00:29:01.909 "r_mbytes_per_sec": 0, 00:29:01.909 "w_mbytes_per_sec": 0 00:29:01.909 }, 00:29:01.909 "claimed": false, 00:29:01.909 "zoned": false, 00:29:01.909 "supported_io_types": { 00:29:01.909 "read": true, 00:29:01.909 "write": true, 00:29:01.909 "unmap": true, 00:29:01.909 "flush": true, 00:29:01.909 "reset": false, 00:29:01.909 "nvme_admin": false, 00:29:01.909 "nvme_io": false, 00:29:01.909 "nvme_io_md": false, 00:29:01.909 "write_zeroes": true, 00:29:01.909 "zcopy": false, 00:29:01.909 "get_zone_info": false, 00:29:01.909 "zone_management": false, 00:29:01.909 "zone_append": false, 00:29:01.909 "compare": false, 00:29:01.909 "compare_and_write": false, 00:29:01.909 "abort": false, 00:29:01.909 "seek_hole": false, 00:29:01.909 "seek_data": false, 00:29:01.909 "copy": false, 00:29:01.909 "nvme_iov_md": false 00:29:01.909 }, 00:29:01.909 "driver_specific": { 00:29:01.909 "ftl": { 00:29:01.909 "base_bdev": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:29:01.909 "cache": "nvc0n1p0" 00:29:01.909 } 00:29:01.909 } 00:29:01.909 } 00:29:01.909 ] 00:29:01.909 11:57:58 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:29:01.909 11:57:58 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:29:01.909 11:57:58 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:02.167 11:57:59 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:29:02.167 11:57:59 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:29:02.425 11:57:59 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:29:02.425 { 00:29:02.425 "name": "ftl0", 00:29:02.425 "aliases": [ 00:29:02.425 "d517d354-a105-4bd1-9c44-04083dc2667e" 00:29:02.425 ], 00:29:02.425 "product_name": "FTL disk", 00:29:02.425 "block_size": 4096, 00:29:02.425 "num_blocks": 23592960, 00:29:02.425 "uuid": "d517d354-a105-4bd1-9c44-04083dc2667e", 00:29:02.425 "assigned_rate_limits": { 00:29:02.425 "rw_ios_per_sec": 0, 00:29:02.425 "rw_mbytes_per_sec": 0, 00:29:02.425 "r_mbytes_per_sec": 0, 00:29:02.425 "w_mbytes_per_sec": 0 00:29:02.425 }, 00:29:02.425 "claimed": false, 00:29:02.425 "zoned": false, 00:29:02.425 "supported_io_types": { 00:29:02.425 "read": true, 00:29:02.425 "write": true, 00:29:02.425 "unmap": true, 00:29:02.425 "flush": true, 00:29:02.425 "reset": false, 00:29:02.425 "nvme_admin": false, 00:29:02.425 "nvme_io": false, 00:29:02.425 "nvme_io_md": false, 00:29:02.425 "write_zeroes": true, 00:29:02.425 "zcopy": false, 00:29:02.425 "get_zone_info": false, 00:29:02.425 "zone_management": false, 00:29:02.425 "zone_append": false, 00:29:02.425 "compare": false, 00:29:02.425 "compare_and_write": false, 00:29:02.425 "abort": false, 00:29:02.425 "seek_hole": false, 00:29:02.425 "seek_data": false, 00:29:02.425 "copy": false, 00:29:02.425 "nvme_iov_md": false 00:29:02.425 }, 00:29:02.425 "driver_specific": { 00:29:02.425 "ftl": { 00:29:02.425 "base_bdev": "dc4304cc-7583-49f5-b65b-bf1b6a1a0cbb", 00:29:02.425 "cache": "nvc0n1p0" 
00:29:02.425 } 00:29:02.425 } 00:29:02.425 } 00:29:02.425 ]' 00:29:02.425 11:57:59 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:29:02.683 11:57:59 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:29:02.683 11:57:59 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:02.942 [2024-07-25 11:57:59.752158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.752224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:02.942 [2024-07-25 11:57:59.752250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:02.942 [2024-07-25 11:57:59.752263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.752312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:02.942 [2024-07-25 11:57:59.755714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.755752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:02.942 [2024-07-25 11:57:59.755767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:29:02.942 [2024-07-25 11:57:59.755791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.756363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.756393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:02.942 [2024-07-25 11:57:59.756408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:29:02.942 [2024-07-25 11:57:59.756427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.760160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.760195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:02.942 [2024-07-25 11:57:59.760210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:29:02.942 [2024-07-25 11:57:59.760224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.767802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.767841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:02.942 [2024-07-25 11:57:59.767857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.506 ms 00:29:02.942 [2024-07-25 11:57:59.767871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.799021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.799072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:02.942 [2024-07-25 11:57:59.799090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.069 ms 00:29:02.942 [2024-07-25 11:57:59.799107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.817839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.817897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:02.942 [2024-07-25 11:57:59.817919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.633 ms 00:29:02.942 
[2024-07-25 11:57:59.817934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.818193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.818220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:02.942 [2024-07-25 11:57:59.818234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:29:02.942 [2024-07-25 11:57:59.818248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.849433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.849486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:02.942 [2024-07-25 11:57:59.849504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.148 ms 00:29:02.942 [2024-07-25 11:57:59.849528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.880579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.880629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:02.942 [2024-07-25 11:57:59.880648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:29:02.942 [2024-07-25 11:57:59.880665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.911331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.911382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:02.942 [2024-07-25 11:57:59.911400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.544 ms 00:29:02.942 [2024-07-25 11:57:59.911414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.942402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.942 [2024-07-25 11:57:59.942454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:02.942 [2024-07-25 11:57:59.942472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.830 ms 00:29:02.942 [2024-07-25 11:57:59.942486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.942 [2024-07-25 11:57:59.942610] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:02.942 [2024-07-25 11:57:59.942642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:02.942 [2024-07-25 11:57:59.942777] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.942996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943134] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 
11:57:59.943457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:29:02.943 [2024-07-25 11:57:59.943794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:02.943 [2024-07-25 11:57:59.943946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:02.944 [2024-07-25 11:57:59.943958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:02.944 [2024-07-25 11:57:59.943972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:02.944 [2024-07-25 11:57:59.943984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:02.944 [2024-07-25 11:57:59.943998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:02.944 [2024-07-25 11:57:59.944010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:02.944 [2024-07-25 11:57:59.944034] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:02.944 [2024-07-25 11:57:59.944046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e 00:29:02.944 [2024-07-25 11:57:59.944062] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:02.944 [2024-07-25 11:57:59.944076] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:02.944 [2024-07-25 11:57:59.944089] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:02.944 [2024-07-25 11:57:59.944101] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:02.944 [2024-07-25 11:57:59.944114] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:02.944 [2024-07-25 11:57:59.944125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:02.944 [2024-07-25 11:57:59.944138] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:02.944 [2024-07-25 11:57:59.944148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:02.944 [2024-07-25 11:57:59.944160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:02.944 [2024-07-25 11:57:59.944171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.944 [2024-07-25 11:57:59.944185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:02.944 [2024-07-25 11:57:59.944198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:29:02.944 [2024-07-25 11:57:59.944211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.944 [2024-07-25 11:57:59.960847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.944 [2024-07-25 11:57:59.960892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:02.944 [2024-07-25 11:57:59.960910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.595 ms 00:29:02.944 [2024-07-25 11:57:59.960926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.944 [2024-07-25 11:57:59.961401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.944 [2024-07-25 11:57:59.961430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:02.944 [2024-07-25 11:57:59.961445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:29:02.944 [2024-07-25 11:57:59.961458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.020279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.020353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.202 [2024-07-25 11:58:00.020373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.020393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.020547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.020570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.202 [2024-07-25 11:58:00.020585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.020598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.020686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.020732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.202 [2024-07-25 11:58:00.020746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.020762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.020800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.020816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.202 [2024-07-25 11:58:00.020829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.020843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.126139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:03.202 [2024-07-25 11:58:00.126212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.202 [2024-07-25 11:58:00.126232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.126247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.211571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.211646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.202 [2024-07-25 11:58:00.211666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.211681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.211842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.211871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.202 [2024-07-25 11:58:00.211884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.211900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.202 [2024-07-25 11:58:00.211960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.202 [2024-07-25 11:58:00.211978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.202 [2024-07-25 11:58:00.211990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.202 [2024-07-25 11:58:00.212003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.203 [2024-07-25 11:58:00.212150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.203 [2024-07-25 11:58:00.212182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.203 [2024-07-25 11:58:00.212216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.203 [2024-07-25 11:58:00.212230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.203 [2024-07-25 11:58:00.212304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.203 [2024-07-25 11:58:00.212327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:03.203 [2024-07-25 11:58:00.212340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.203 [2024-07-25 11:58:00.212354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.203 [2024-07-25 11:58:00.212421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.203 [2024-07-25 11:58:00.212439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.203 [2024-07-25 11:58:00.212454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.203 [2024-07-25 11:58:00.212469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.203 [2024-07-25 11:58:00.212537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.203 [2024-07-25 11:58:00.212558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.203 [2024-07-25 11:58:00.212572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.203 [2024-07-25 11:58:00.212585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.203 [2024-07-25 
11:58:00.212813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.645 ms, result 0 00:29:03.203 true 00:29:03.203 11:58:00 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79349 00:29:03.203 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79349 ']' 00:29:03.203 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79349 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79349 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.464 killing process with pid 79349 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79349' 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79349 00:29:03.464 11:58:00 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79349 00:29:08.740 11:58:04 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:29:08.999 65536+0 records in 00:29:08.999 65536+0 records out 00:29:08.999 268435456 bytes (268 MB, 256 MiB) copied, 1.2726 s, 211 MB/s 00:29:08.999 11:58:06 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:09.257 [2024-07-25 11:58:06.125640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:09.257 [2024-07-25 11:58:06.126069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79544 ] 00:29:09.516 [2024-07-25 11:58:06.297381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.516 [2024-07-25 11:58:06.482234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.775 [2024-07-25 11:58:06.800215] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:09.775 [2024-07-25 11:58:06.800324] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:10.035 [2024-07-25 11:58:06.961795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.961862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:10.035 [2024-07-25 11:58:06.961886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:10.035 [2024-07-25 11:58:06.961901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.965769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.965823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:10.035 [2024-07-25 11:58:06.965844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:29:10.035 [2024-07-25 11:58:06.965859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.966024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:10.035 [2024-07-25 11:58:06.967229] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:10.035 [2024-07-25 11:58:06.967279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.967298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:10.035 [2024-07-25 11:58:06.967313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:29:10.035 [2024-07-25 11:58:06.967327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.968661] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:10.035 [2024-07-25 11:58:06.988721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.988774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:10.035 [2024-07-25 11:58:06.988803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.061 ms 00:29:10.035 [2024-07-25 11:58:06.988817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.988964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.988991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:10.035 [2024-07-25 11:58:06.989008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:10.035 [2024-07-25 11:58:06.989022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.994066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:10.035 [2024-07-25 11:58:06.994139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:10.035 [2024-07-25 11:58:06.994158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.962 ms 00:29:10.035 [2024-07-25 11:58:06.994173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.994322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.994348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:10.035 [2024-07-25 11:58:06.994364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:10.035 [2024-07-25 11:58:06.994377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.994430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.994450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:10.035 [2024-07-25 11:58:06.994470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:10.035 [2024-07-25 11:58:06.994483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.994527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:10.035 [2024-07-25 11:58:06.999783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.999830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:10.035 [2024-07-25 11:58:06.999849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:29:10.035 [2024-07-25 11:58:06.999863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:06.999975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:06.999999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:10.035 [2024-07-25 11:58:07.000014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:10.035 [2024-07-25 11:58:07.000028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:07.000066] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:10.035 [2024-07-25 11:58:07.000117] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:10.035 [2024-07-25 11:58:07.000181] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:10.035 [2024-07-25 11:58:07.000207] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:10.035 [2024-07-25 11:58:07.000334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:10.035 [2024-07-25 11:58:07.000353] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:10.035 [2024-07-25 11:58:07.000370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:10.035 [2024-07-25 11:58:07.000388] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:10.035 [2024-07-25 11:58:07.000404] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:10.035 [2024-07-25 11:58:07.000429] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:10.035 [2024-07-25 11:58:07.000443] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:10.035 [2024-07-25 11:58:07.000457] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:10.035 [2024-07-25 11:58:07.000470] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:10.035 [2024-07-25 11:58:07.000485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:07.000498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:10.035 [2024-07-25 11:58:07.000512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:29:10.035 [2024-07-25 11:58:07.000526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:07.000645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.035 [2024-07-25 11:58:07.000665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:10.035 [2024-07-25 11:58:07.000685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:10.035 [2024-07-25 11:58:07.000740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.035 [2024-07-25 11:58:07.000880] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:10.035 [2024-07-25 11:58:07.000903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:10.035 [2024-07-25 11:58:07.000919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:10.035 [2024-07-25 11:58:07.000933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.035 [2024-07-25 11:58:07.000947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:10.035 [2024-07-25 11:58:07.000960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:10.035 [2024-07-25 11:58:07.000973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:10.035 [2024-07-25 11:58:07.000987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:10.035 [2024-07-25 11:58:07.001000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:10.036 [2024-07-25 11:58:07.001025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:10.036 [2024-07-25 11:58:07.001038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:10.036 [2024-07-25 11:58:07.001051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:10.036 [2024-07-25 11:58:07.001063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:10.036 [2024-07-25 11:58:07.001086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:10.036 [2024-07-25 11:58:07.001098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:10.036 [2024-07-25 11:58:07.001133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001163] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:10.036 [2024-07-25 11:58:07.001199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:10.036 [2024-07-25 11:58:07.001237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:10.036 [2024-07-25 11:58:07.001274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:10.036 [2024-07-25 11:58:07.001311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:10.036 [2024-07-25 11:58:07.001349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:10.036 [2024-07-25 11:58:07.001374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:10.036 [2024-07-25 11:58:07.001387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:10.036 [2024-07-25 11:58:07.001399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:10.036 [2024-07-25 11:58:07.001411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:10.036 [2024-07-25 11:58:07.001424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:10.036 [2024-07-25 11:58:07.001436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:10.036 [2024-07-25 11:58:07.001461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:10.036 [2024-07-25 11:58:07.001474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001486] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:10.036 [2024-07-25 11:58:07.001500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:10.036 [2024-07-25 11:58:07.001514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.036 [2024-07-25 11:58:07.001546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:10.036 [2024-07-25 11:58:07.001559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:10.036 [2024-07-25 11:58:07.001577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:10.036 
[2024-07-25 11:58:07.001590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:10.036 [2024-07-25 11:58:07.001605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:10.036 [2024-07-25 11:58:07.001618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:10.036 [2024-07-25 11:58:07.001632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:10.036 [2024-07-25 11:58:07.001649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:10.036 [2024-07-25 11:58:07.001679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:10.036 [2024-07-25 11:58:07.001732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:10.036 [2024-07-25 11:58:07.001749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:10.036 [2024-07-25 11:58:07.001763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:10.036 [2024-07-25 11:58:07.001778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:10.036 [2024-07-25 11:58:07.001792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:10.036 [2024-07-25 11:58:07.001805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:10.036 [2024-07-25 11:58:07.001819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:10.036 [2024-07-25 11:58:07.001833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:10.036 [2024-07-25 11:58:07.001901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:10.036 [2024-07-25 11:58:07.001916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:10.036 [2024-07-25 11:58:07.001947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:10.036 [2024-07-25 11:58:07.001961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:10.036 [2024-07-25 11:58:07.001975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:10.036 [2024-07-25 11:58:07.001990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.036 [2024-07-25 11:58:07.002005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:10.036 [2024-07-25 11:58:07.002019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:29:10.036 [2024-07-25 11:58:07.002032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.036 [2024-07-25 11:58:07.056459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.036 [2024-07-25 11:58:07.056541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:10.036 [2024-07-25 11:58:07.056590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.321 ms 00:29:10.036 [2024-07-25 11:58:07.056605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.036 [2024-07-25 11:58:07.056931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.036 [2024-07-25 11:58:07.056981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:10.036 [2024-07-25 11:58:07.057010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:29:10.036 [2024-07-25 11:58:07.057079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.104069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.104128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:10.296 [2024-07-25 11:58:07.104155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.910 ms 00:29:10.296 [2024-07-25 11:58:07.104176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.104306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.104332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:10.296 [2024-07-25 11:58:07.104348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:10.296 [2024-07-25 11:58:07.104362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.104783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.104807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:10.296 [2024-07-25 11:58:07.104822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:29:10.296 [2024-07-25 11:58:07.104835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.105051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.105082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:10.296 [2024-07-25 11:58:07.105104] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:29:10.296 [2024-07-25 11:58:07.105117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.125231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.125281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:10.296 [2024-07-25 11:58:07.125309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.078 ms 00:29:10.296 [2024-07-25 11:58:07.125324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.145258] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:10.296 [2024-07-25 11:58:07.145313] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:10.296 [2024-07-25 11:58:07.145343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.145358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:10.296 [2024-07-25 11:58:07.145374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.837 ms 00:29:10.296 [2024-07-25 11:58:07.145388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.178414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.178473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:10.296 [2024-07-25 11:58:07.178491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.910 ms 00:29:10.296 [2024-07-25 11:58:07.178503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.194040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.194084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:10.296 [2024-07-25 11:58:07.194100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.364 ms 00:29:10.296 [2024-07-25 11:58:07.194111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.208891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.208931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:10.296 [2024-07-25 11:58:07.208963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.689 ms 00:29:10.296 [2024-07-25 11:58:07.208974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.209834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.209866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:10.296 [2024-07-25 11:58:07.209881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:29:10.296 [2024-07-25 11:58:07.209892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.277412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.277484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:10.296 [2024-07-25 11:58:07.277520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.485 ms 00:29:10.296 [2024-07-25 11:58:07.277532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.289277] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:10.296 [2024-07-25 11:58:07.302519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.302581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:10.296 [2024-07-25 11:58:07.302642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.849 ms 00:29:10.296 [2024-07-25 11:58:07.302656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.302854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.302888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:10.296 [2024-07-25 11:58:07.302918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:10.296 [2024-07-25 11:58:07.302938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.303036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.303065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:10.296 [2024-07-25 11:58:07.303088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:10.296 [2024-07-25 11:58:07.303109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.303167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.303194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:10.296 [2024-07-25 11:58:07.303217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:10.296 [2024-07-25 11:58:07.303248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.296 [2024-07-25 11:58:07.303318] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:10.296 [2024-07-25 11:58:07.303351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.296 [2024-07-25 11:58:07.303365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:10.296 [2024-07-25 11:58:07.303377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:10.296 [2024-07-25 11:58:07.303388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.555 [2024-07-25 11:58:07.333171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.555 [2024-07-25 11:58:07.333215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:10.555 [2024-07-25 11:58:07.333256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.749 ms 00:29:10.555 [2024-07-25 11:58:07.333268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.555 [2024-07-25 11:58:07.333411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.555 [2024-07-25 11:58:07.333449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:10.555 [2024-07-25 11:58:07.333462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:10.555 [2024-07-25 11:58:07.333474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:10.555 [2024-07-25 11:58:07.334458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:10.555 [2024-07-25 11:58:07.338473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.328 ms, result 0 00:29:10.555 [2024-07-25 11:58:07.339344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:10.555 [2024-07-25 11:58:07.355506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:21.058  Copying: 256/256 [MB] (average 24 MBps)[2024-07-25 11:58:17.817176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:21.058 [2024-07-25 11:58:17.829919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.058 [2024-07-25 11:58:17.829966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:21.058 [2024-07-25 11:58:17.829996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:21.058 [2024-07-25 11:58:17.830009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.058 [2024-07-25 11:58:17.830042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:21.058 [2024-07-25 11:58:17.833334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.058 [2024-07-25 11:58:17.833376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:21.058 [2024-07-25 11:58:17.833392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:29:21.058 [2024-07-25 11:58:17.833403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.058 [2024-07-25 11:58:17.835012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.058 [2024-07-25 11:58:17.835056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:21.058 [2024-07-25 11:58:17.835074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:29:21.058 [2024-07-25 11:58:17.835100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.058 [2024-07-25 11:58:17.842420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.058 [2024-07-25 11:58:17.842462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:21.058 [2024-07-25 11:58:17.842478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.294 ms 00:29:21.058 [2024-07-25 11:58:17.842499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.058 [2024-07-25 11:58:17.850496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.058 [2024-07-25 11:58:17.850557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:21.058 [2024-07-25 11:58:17.850572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.922 ms 00:29:21.058 [2024-07-25 11:58:17.850582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.058 [2024-07-25
11:58:17.882759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.058 [2024-07-25 11:58:17.882804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:21.058 [2024-07-25 11:58:17.882821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.098 ms 00:29:21.059 [2024-07-25 11:58:17.882833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:17.901455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.059 [2024-07-25 11:58:17.901499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:21.059 [2024-07-25 11:58:17.901517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.555 ms 00:29:21.059 [2024-07-25 11:58:17.901528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:17.901771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.059 [2024-07-25 11:58:17.901795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:21.059 [2024-07-25 11:58:17.901808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:29:21.059 [2024-07-25 11:58:17.901820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:17.933909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.059 [2024-07-25 11:58:17.933954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:21.059 [2024-07-25 11:58:17.933972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.064 ms 00:29:21.059 [2024-07-25 11:58:17.933983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:17.965731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.059 [2024-07-25 11:58:17.965792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:21.059 [2024-07-25 11:58:17.965811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.680 ms 00:29:21.059 [2024-07-25 11:58:17.965823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:17.997289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.059 [2024-07-25 11:58:17.997334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:21.059 [2024-07-25 11:58:17.997352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.397 ms 00:29:21.059 [2024-07-25 11:58:17.997364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:18.029205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.059 [2024-07-25 11:58:18.029251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:21.059 [2024-07-25 11:58:18.029269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.738 ms 00:29:21.059 [2024-07-25 11:58:18.029280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.059 [2024-07-25 11:58:18.029345] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:21.059 [2024-07-25 11:58:18.029370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:29:21.059 [2024-07-25 11:58:18.029404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:29:21.059 [2024-07-25 11:58:18.029758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.029995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:21.059 [2024-07-25 11:58:18.030206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030333] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:21.060 [2024-07-25 11:58:18.030633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:21.060 [2024-07-25 11:58:18.030645] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e 00:29:21.060 [2024-07-25 11:58:18.030657] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:21.060 [2024-07-25 11:58:18.030668] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:21.060 [2024-07-25 11:58:18.030678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:21.060 [2024-07-25 11:58:18.030717] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:21.060 [2024-07-25 11:58:18.030728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:21.060 [2024-07-25 11:58:18.030739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:21.060 [2024-07-25 11:58:18.030750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:21.060 [2024-07-25 11:58:18.030760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:21.060 [2024-07-25 11:58:18.030770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:21.060 [2024-07-25 11:58:18.030781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.060 [2024-07-25 11:58:18.030794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:21.060 [2024-07-25 11:58:18.030810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:29:21.060 [2024-07-25 11:58:18.030826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.060 [2024-07-25 11:58:18.047773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.060 [2024-07-25 11:58:18.047815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:21.060 [2024-07-25 11:58:18.047833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.919 ms 00:29:21.060 [2024-07-25 11:58:18.047845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.060 [2024-07-25 11:58:18.048292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.060 [2024-07-25 11:58:18.048316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:21.060 [2024-07-25 11:58:18.048338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:29:21.060 [2024-07-25 11:58:18.048349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.060 [2024-07-25 11:58:18.088676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.060 [2024-07-25 11:58:18.088750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:21.060 [2024-07-25 11:58:18.088769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.060 [2024-07-25 11:58:18.088781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.060 [2024-07-25 11:58:18.088907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.060 [2024-07-25 11:58:18.088925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:21.060 [2024-07-25 11:58:18.088942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.060 [2024-07-25 11:58:18.088953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.060 [2024-07-25 11:58:18.089019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.060 [2024-07-25 11:58:18.089039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:29:21.060 [2024-07-25 11:58:18.089052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.060 [2024-07-25 11:58:18.089064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.060 [2024-07-25 11:58:18.089090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.060 [2024-07-25 11:58:18.089105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:21.060 [2024-07-25 11:58:18.089116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.060 [2024-07-25 11:58:18.089134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.186670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.186743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:21.320 [2024-07-25 11:58:18.186762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.320 [2024-07-25 11:58:18.186775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.273083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.273182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:21.320 [2024-07-25 11:58:18.273211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.320 [2024-07-25 11:58:18.273223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.273308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.273328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:21.320 [2024-07-25 11:58:18.273340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.320 [2024-07-25 11:58:18.273352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.273399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.273415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:21.320 [2024-07-25 11:58:18.273427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.320 [2024-07-25 11:58:18.273439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.273583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.273603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:21.320 [2024-07-25 11:58:18.273615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.320 [2024-07-25 11:58:18.273626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.273677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.273696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:21.320 [2024-07-25 11:58:18.273707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.320 [2024-07-25 11:58:18.273718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.320 [2024-07-25 11:58:18.273834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.320 [2024-07-25 11:58:18.273852] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:21.320 [2024-07-25 11:58:18.273865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:21.320 [2024-07-25 11:58:18.273876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:21.320 [2024-07-25 11:58:18.273932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.320 [2024-07-25 11:58:18.273950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:21.320 [2024-07-25 11:58:18.273962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:21.320 [2024-07-25 11:58:18.273974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:21.320 [2024-07-25 11:58:18.274146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.219 ms, result 0
00:29:22.693
00:29:22.693
00:29:22.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:22.693 11:58:19 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79680
00:29:22.693 11:58:19 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79680
00:29:22.693 11:58:19 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79680 ']'
00:29:22.693 11:58:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:22.693 11:58:19 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:22.693 11:58:19 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:29:22.693 11:58:19 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:22.693 11:58:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:22.693 11:58:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:29:22.693 [2024-07-25 11:58:19.632269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:29:22.693 [2024-07-25 11:58:19.632453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79680 ]
00:29:22.952 [2024-07-25 11:58:19.805566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:23.210 [2024-07-25 11:58:20.028012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.777 11:58:20 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:23.777 11:58:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:29:23.777 11:58:20 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:29:24.054 [2024-07-25 11:58:21.022246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:24.054 [2024-07-25 11:58:21.022346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:24.332 [2024-07-25 11:58:21.201847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.332 [2024-07-25 11:58:21.201922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:24.332 [2024-07-25 11:58:21.201942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:29:24.332 [2024-07-25 11:58:21.201960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.332 [2024-07-25 11:58:21.205747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.332 [2024-07-25 11:58:21.205811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:24.333 [2024-07-25 11:58:21.205828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms
00:29:24.333 [2024-07-25 11:58:21.205846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.333 [2024-07-25 11:58:21.206019] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:24.333 [2024-07-25 11:58:21.207060] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:24.333 [2024-07-25 11:58:21.207099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.333 [2024-07-25 11:58:21.207136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:24.333 [2024-07-25 11:58:21.207164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms
00:29:24.333 [2024-07-25 11:58:21.207187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.333 [2024-07-25 11:58:21.208413] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:24.333 [2024-07-25 11:58:21.224362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.333 [2024-07-25 11:58:21.224406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:29:24.333 [2024-07-25 11:58:21.224448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.939 ms
00:29:24.333 [2024-07-25 11:58:21.224462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.333 [2024-07-25 11:58:21.224608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.333 [2024-07-25 11:58:21.224630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:29:24.333 [2024-07-25 11:58:21.224650]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:24.333 [2024-07-25 11:58:21.224662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.229516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.229560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:24.333 [2024-07-25 11:58:21.229606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:29:24.333 [2024-07-25 11:58:21.229620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.229861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.229886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:24.333 [2024-07-25 11:58:21.229907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:29:24.333 [2024-07-25 11:58:21.229927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.229980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.229996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:24.333 [2024-07-25 11:58:21.230014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:24.333 [2024-07-25 11:58:21.230026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.230069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:24.333 [2024-07-25 11:58:21.234419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.234485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:24.333 [2024-07-25 11:58:21.234503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:29:24.333 [2024-07-25 11:58:21.234521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.234663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.234720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:24.333 [2024-07-25 11:58:21.234746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:24.333 [2024-07-25 11:58:21.234764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.234798] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:24.333 [2024-07-25 11:58:21.234836] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:24.333 [2024-07-25 11:58:21.234893] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:24.333 [2024-07-25 11:58:21.234926] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:24.333 [2024-07-25 11:58:21.235035] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:24.333 [2024-07-25 11:58:21.235081] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:24.333 [2024-07-25 11:58:21.235097] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:24.333 [2024-07-25 11:58:21.235117] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:24.333 [2024-07-25 11:58:21.235131] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:24.333 [2024-07-25 11:58:21.235149] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:24.333 [2024-07-25 11:58:21.235161] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:24.333 [2024-07-25 11:58:21.235201] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:24.333 [2024-07-25 11:58:21.235214] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:24.333 [2024-07-25 11:58:21.235236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.235249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:24.333 [2024-07-25 11:58:21.235265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:29:24.333 [2024-07-25 11:58:21.235283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.235384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.333 [2024-07-25 11:58:21.235411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:24.333 [2024-07-25 11:58:21.235428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:24.333 [2024-07-25 11:58:21.235440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.333 [2024-07-25 11:58:21.235561] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:24.333 [2024-07-25 11:58:21.235580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:24.333 [2024-07-25 11:58:21.235597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:24.333 [2024-07-25 11:58:21.235609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:24.333 [2024-07-25 11:58:21.235634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:24.333 [2024-07-25 11:58:21.235646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:24.333 [2024-07-25 11:58:21.235661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:24.333 [2024-07-25 11:58:21.235690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:24.333 [2024-07-25 11:58:21.235755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:24.333 [2024-07-25 11:58:21.235776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:24.333 [2024-07-25 11:58:21.235794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:24.333 [2024-07-25 11:58:21.235807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:24.333 [2024-07-25 11:58:21.235823] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:24.333 [2024-07-25 11:58:21.235836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:24.333 [2024-07-25 11:58:21.235852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:24.333 [2024-07-25 11:58:21.235864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:24.333 
[2024-07-25 11:58:21.235880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:24.333 [2024-07-25 11:58:21.235892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:24.333 [2024-07-25 11:58:21.235908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:24.333 [2024-07-25 11:58:21.235920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:24.333 [2024-07-25 11:58:21.235936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:24.333 [2024-07-25 11:58:21.235948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:24.333 [2024-07-25 11:58:21.235966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:24.333 [2024-07-25 11:58:21.235978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:24.333 [2024-07-25 11:58:21.235998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:24.333 [2024-07-25 11:58:21.236010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:24.333 [2024-07-25 11:58:21.236040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:24.333 [2024-07-25 11:58:21.236065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:24.333 [2024-07-25 11:58:21.236085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:24.333 [2024-07-25 11:58:21.236111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:24.333 [2024-07-25 11:58:21.236126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:24.333 [2024-07-25 11:58:21.236138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:24.333 [2024-07-25 11:58:21.236152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:24.333 [2024-07-25 11:58:21.236163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:24.333 [2024-07-25 11:58:21.236179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:24.333 [2024-07-25 11:58:21.236190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:24.333 [2024-07-25 11:58:21.236204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:24.333 [2024-07-25 11:58:21.236215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:24.333 [2024-07-25 11:58:21.236246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:24.333 [2024-07-25 11:58:21.236258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:24.333 [2024-07-25 11:58:21.236277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:24.333 [2024-07-25 11:58:21.236289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:24.333 [2024-07-25 11:58:21.236305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:24.333 [2024-07-25 11:58:21.236317] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:24.333 [2024-07-25 11:58:21.236334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:24.333 [2024-07-25 11:58:21.236346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:24.334 [2024-07-25 11:58:21.236362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:24.334 [2024-07-25 11:58:21.236374] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:29:24.334 [2024-07-25 11:58:21.236390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:24.334 [2024-07-25 11:58:21.236401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:24.334 [2024-07-25 11:58:21.236417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:24.334 [2024-07-25 11:58:21.236428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:24.334 [2024-07-25 11:58:21.236444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:24.334 [2024-07-25 11:58:21.236457] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:24.334 [2024-07-25 11:58:21.236494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.236509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:24.334 [2024-07-25 11:58:21.236531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:24.334 [2024-07-25 11:58:21.236544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:24.334 [2024-07-25 11:58:21.236560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:24.334 [2024-07-25 11:58:21.236573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:24.334 [2024-07-25 11:58:21.236589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:24.334 [2024-07-25 11:58:21.236602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:24.334 [2024-07-25 11:58:21.236633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:24.334 [2024-07-25 11:58:21.236645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:24.334 [2024-07-25 11:58:21.236661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.236673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.237043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.237297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.237500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:24.334 [2024-07-25 11:58:21.237736] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:24.334 [2024-07-25 
11:58:21.237965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.238202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:24.334 [2024-07-25 11:58:21.238500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:24.334 [2024-07-25 11:58:21.238755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:24.334 [2024-07-25 11:58:21.239041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:24.334 [2024-07-25 11:58:21.239302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.239469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:24.334 [2024-07-25 11:58:21.239663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:29:24.334 [2024-07-25 11:58:21.239817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.275316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.275568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.334 [2024-07-25 11:58:21.275601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.271 ms 00:29:24.334 [2024-07-25 11:58:21.275617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.275836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.275864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:24.334 [2024-07-25 11:58:21.275880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:24.334 [2024-07-25 11:58:21.275894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.315230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.315293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:24.334 [2024-07-25 11:58:21.315313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.304 ms 00:29:24.334 [2024-07-25 11:58:21.315331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.315454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.315485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:24.334 [2024-07-25 11:58:21.315517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:24.334 [2024-07-25 11:58:21.315534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.315899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.315938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:24.334 [2024-07-25 11:58:21.315953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:29:24.334 [2024-07-25 11:58:21.315970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.316130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.316164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:24.334 [2024-07-25 11:58:21.316179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:29:24.334 [2024-07-25 11:58:21.316196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.335937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.336004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:24.334 [2024-07-25 11:58:21.336022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.710 ms 00:29:24.334 [2024-07-25 11:58:21.336039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.334 [2024-07-25 11:58:21.352326] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:24.334 [2024-07-25 11:58:21.352373] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:24.334 [2024-07-25 11:58:21.352412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.334 [2024-07-25 11:58:21.352429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:24.334 [2024-07-25 11:58:21.352442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.201 ms 00:29:24.334 [2024-07-25 11:58:21.352458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.593 [2024-07-25 11:58:21.380748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.593 [2024-07-25 11:58:21.380825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:24.594 [2024-07-25 11:58:21.380847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.200 ms 00:29:24.594 [2024-07-25 11:58:21.380873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.395432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.395498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:24.594 [2024-07-25 11:58:21.395529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.467 ms 00:29:24.594 [2024-07-25 11:58:21.395551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.410127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.410190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:24.594 [2024-07-25 11:58:21.410207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.488 ms 00:29:24.594 [2024-07-25 11:58:21.410224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.411075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.411121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:24.594 [2024-07-25 11:58:21.411138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:29:24.594 [2024-07-25 11:58:21.411155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 
11:58:21.491165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.491257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:24.594 [2024-07-25 11:58:21.491279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.977 ms 00:29:24.594 [2024-07-25 11:58:21.491294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.503513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:24.594 [2024-07-25 11:58:21.516774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.516833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:24.594 [2024-07-25 11:58:21.516873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.337 ms 00:29:24.594 [2024-07-25 11:58:21.516886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.517016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.517036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:24.594 [2024-07-25 11:58:21.517052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:24.594 [2024-07-25 11:58:21.517064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.517130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.517147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:24.594 [2024-07-25 11:58:21.517165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:24.594 [2024-07-25 11:58:21.517176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.517209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.517223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:24.594 [2024-07-25 11:58:21.517236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:24.594 [2024-07-25 11:58:21.517247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.517290] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:24.594 [2024-07-25 11:58:21.517306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.517320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:24.594 [2024-07-25 11:58:21.517333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:24.594 [2024-07-25 11:58:21.517349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.546352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.546416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:24.594 [2024-07-25 11:58:21.546434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.975 ms 00:29:24.594 [2024-07-25 11:58:21.546449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.594 [2024-07-25 11:58:21.546570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.594 [2024-07-25 11:58:21.546626] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:24.594 [2024-07-25 11:58:21.546644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:29:24.594 [2024-07-25 11:58:21.546658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.594 [2024-07-25 11:58:21.547723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:24.594 [2024-07-25 11:58:21.551647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.505 ms, result 0
00:29:24.594 [2024-07-25 11:58:21.552671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:24.594 Some configs were skipped because the RPC state that can call them passed over.
00:29:24.594 11:58:21 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:29:24.853 [2024-07-25 11:58:21.837959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.853 [2024-07-25 11:58:21.838197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:29:24.853 [2024-07-25 11:58:21.838352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms
00:29:24.853 [2024-07-25 11:58:21.838500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.853 [2024-07-25 11:58:21.838642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.124 ms, result 0
00:29:24.853 true
00:29:24.853 11:58:21 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:29:25.112 [2024-07-25 11:58:22.057919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:25.112 [2024-07-25 11:58:22.058160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:29:25.112 [2024-07-25 11:58:22.058291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms
00:29:25.112 [2024-07-25 11:58:22.058439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:25.112 [2024-07-25 11:58:22.058544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.725 ms, result 0
00:29:25.112 true
00:29:25.112 11:58:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79680
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79680 ']'
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79680
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79680
00:29:25.112 killing process with pid 79680
11:58:22 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79680'
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79680
00:29:25.112 11:58:22 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79680
00:29:26.049 [2024-07-25 11:58:22.989226]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.049 [2024-07-25 11:58:22.989298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:26.049 [2024-07-25 11:58:22.989339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:26.050 [2024-07-25 11:58:22.989356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:22.989391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:26.050 [2024-07-25 11:58:22.992799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:22.992837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:26.050 [2024-07-25 11:58:22.992868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:29:26.050 [2024-07-25 11:58:22.992884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:22.993206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:22.993231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:26.050 [2024-07-25 11:58:22.993245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:29:26.050 [2024-07-25 11:58:22.993259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:22.997516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:22.997569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:26.050 [2024-07-25 11:58:22.997587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:29:26.050 [2024-07-25 11:58:22.997602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.005436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.005478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:26.050 [2024-07-25 11:58:23.005494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.787 ms 00:29:26.050 [2024-07-25 11:58:23.005511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.018510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.018605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:26.050 [2024-07-25 11:58:23.018642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:29:26.050 [2024-07-25 11:58:23.018659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.027419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.027470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:26.050 [2024-07-25 11:58:23.027487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.676 ms 00:29:26.050 [2024-07-25 11:58:23.027501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.027652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.027677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:26.050 [2024-07-25 11:58:23.027712] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:29:26.050 [2024-07-25 11:58:23.027759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.040419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.040462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:26.050 [2024-07-25 11:58:23.040494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.633 ms 00:29:26.050 [2024-07-25 11:58:23.040508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.053137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.053215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:26.050 [2024-07-25 11:58:23.053234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:29:26.050 [2024-07-25 11:58:23.053253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.065947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.065994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:26.050 [2024-07-25 11:58:23.066011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.648 ms 00:29:26.050 [2024-07-25 11:58:23.066025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.078744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.050 [2024-07-25 11:58:23.078791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:26.050 [2024-07-25 11:58:23.078809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.645 ms 00:29:26.050 [2024-07-25 11:58:23.078822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.050 [2024-07-25 11:58:23.078867] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:26.050 [2024-07-25 11:58:23.078895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.078995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 
11:58:23.079034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:29:26.050 [2024-07-25 11:58:23.079369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:26.050 [2024-07-25 11:58:23.079566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.079992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.080005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.080017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:26.051 [2024-07-25 11:58:23.080031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:26.051 [2024-07-25 11:58:23.080267] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:26.051 [2024-07-25 11:58:23.080279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e
00:29:26.051 [2024-07-25 11:58:23.080295] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:26.051 [2024-07-25 11:58:23.080306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:26.051 [2024-07-25 11:58:23.080319] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:26.051 [2024-07-25 11:58:23.080331] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:26.051 [2024-07-25 11:58:23.080344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:26.051 [2024-07-25 11:58:23.080355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:26.051 [2024-07-25 11:58:23.080369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:26.051 [2024-07-25 11:58:23.080379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:26.051 [2024-07-25 11:58:23.080404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:26.051 [2024-07-25 11:58:23.080416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:26.051 [2024-07-25 11:58:23.080430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:26.051 [2024-07-25 11:58:23.080443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:29:26.051 [2024-07-25 11:58:23.080460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.097367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.310 [2024-07-25 11:58:23.097428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:26.310 [2024-07-25 11:58:23.097446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.848 ms 00:29:26.310 [2024-07-25 11:58:23.097463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.098001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.310 [2024-07-25 11:58:23.098042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:26.310 [2024-07-25 11:58:23.098062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:29:26.310 [2024-07-25 11:58:23.098076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.153013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.310 [2024-07-25 11:58:23.153071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:26.310 [2024-07-25 11:58:23.153105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.310 [2024-07-25 11:58:23.153119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.153234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.310 [2024-07-25 11:58:23.153258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:26.310 [2024-07-25 11:58:23.153274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.310 [2024-07-25 11:58:23.153288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.153348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.310 [2024-07-25 11:58:23.153371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:26.310 [2024-07-25 11:58:23.153384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.310 [2024-07-25 11:58:23.153400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.153425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.310 [2024-07-25 11:58:23.153441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:26.310 [2024-07-25 11:58:23.153453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.310 [2024-07-25 11:58:23.153469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.310 [2024-07-25 11:58:23.249295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.310 [2024-07-25 11:58:23.249387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:26.310 [2024-07-25 11:58:23.249409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.310 [2024-07-25 11:58:23.249423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.311 [2024-07-25 
11:58:23.337966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:26.311 [2024-07-25 11:58:23.338063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.338233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:26.311 [2024-07-25 11:58:23.338274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.338328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:26.311 [2024-07-25 11:58:23.338360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.338495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:26.311 [2024-07-25 11:58:23.338532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.338646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:26.311 [2024-07-25 11:58:23.338683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.338778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:26.311 [2024-07-25 11:58:23.338813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.338885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:26.311 [2024-07-25 11:58:23.338907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:26.311 [2024-07-25 11:58:23.338920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:26.311 [2024-07-25 11:58:23.338934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:26.311 [2024-07-25 11:58:23.339096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.869 ms, result 0
00:29:27.244 11:58:24 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:29:27.244 11:58:24 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:27.502 [2024-07-25 11:58:24.306940] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-25 11:58:24.307145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79744 ]
[2024-07-25 11:58:24.476103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 11:58:24.661351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 11:58:24.952338] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-07-25 11:58:24.952430] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-07-25 11:58:25.112515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.280 [2024-07-25 11:58:25.112568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:28.280 [2024-07-25 11:58:25.112604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:29:28.280 [2024-07-25 11:58:25.112615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.280 [2024-07-25 11:58:25.115980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.280 [2024-07-25 11:58:25.116027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:28.280 [2024-07-25 11:58:25.116044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms
00:29:28.280 [2024-07-25 11:58:25.116056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.280 [2024-07-25 11:58:25.116185] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:28.280 [2024-07-25 11:58:25.117167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:28.280 [2024-07-25 11:58:25.117210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.280 [2024-07-25 11:58:25.117241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:28.280 [2024-07-25 11:58:25.117269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms
00:29:28.280 [2024-07-25 11:58:25.117280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.280 [2024-07-25 11:58:25.118590] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:28.280 [2024-07-25 11:58:25.135507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.280 [2024-07-25 11:58:25.135549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:29:28.280 [2024-07-25 11:58:25.135588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.917 ms
00:29:28.280 [2024-07-25 11:58:25.135599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.280 [2024-07-25 11:58:25.135787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.280 [2024-07-25 11:58:25.135811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:29:28.280 [2024-07-25 11:58:25.135825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.027 ms 00:29:28.280 [2024-07-25 11:58:25.135836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.140445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.140486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:28.280 [2024-07-25 11:58:25.140517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:29:28.280 [2024-07-25 11:58:25.140527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.140646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.140667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:28.280 [2024-07-25 11:58:25.140679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:28.280 [2024-07-25 11:58:25.140688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.140781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.140800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:28.280 [2024-07-25 11:58:25.140817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:28.280 [2024-07-25 11:58:25.140827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.140860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:28.280 [2024-07-25 11:58:25.145144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.145191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:28.280 [2024-07-25 11:58:25.145226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.292 ms 00:29:28.280 [2024-07-25 11:58:25.145237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.145306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.145325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:28.280 [2024-07-25 11:58:25.145337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:28.280 [2024-07-25 11:58:25.145347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.145376] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:28.280 [2024-07-25 11:58:25.145404] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:28.280 [2024-07-25 11:58:25.145447] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:28.280 [2024-07-25 11:58:25.145466] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:28.280 [2024-07-25 11:58:25.145560] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:28.280 [2024-07-25 11:58:25.145575] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:28.280 [2024-07-25 11:58:25.145587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:28.280 [2024-07-25 11:58:25.145601] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:28.280 [2024-07-25 11:58:25.145613] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:28.280 [2024-07-25 11:58:25.145628] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:28.280 [2024-07-25 11:58:25.145638] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:28.280 [2024-07-25 11:58:25.145648] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:28.280 [2024-07-25 11:58:25.145658] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:28.280 [2024-07-25 11:58:25.145668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.145678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:28.280 [2024-07-25 11:58:25.145689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:29:28.280 [2024-07-25 11:58:25.145715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.145879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.280 [2024-07-25 11:58:25.145898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:28.280 [2024-07-25 11:58:25.145917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:28.280 [2024-07-25 11:58:25.145928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.280 [2024-07-25 11:58:25.146038] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:28.280 [2024-07-25 11:58:25.146056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:28.280 [2024-07-25 11:58:25.146068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:28.280 [2024-07-25 11:58:25.146145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:28.280 [2024-07-25 11:58:25.146175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:28.280 [2024-07-25 11:58:25.146193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:28.280 [2024-07-25 11:58:25.146203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:28.280 [2024-07-25 11:58:25.146212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:28.280 [2024-07-25 11:58:25.146221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:28.280 [2024-07-25 11:58:25.146231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:28.280 [2024-07-25 11:58:25.146240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146250] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:28.280 [2024-07-25 11:58:25.146259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:28.280 [2024-07-25 11:58:25.146307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:28.280 [2024-07-25 11:58:25.146335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:28.280 [2024-07-25 11:58:25.146362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:28.280 [2024-07-25 11:58:25.146390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:28.280 [2024-07-25 11:58:25.146408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:28.280 [2024-07-25 11:58:25.146418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:28.280 [2024-07-25 11:58:25.146426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:28.280 [2024-07-25 11:58:25.146436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:28.280 [2024-07-25 11:58:25.146445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:28.280 [2024-07-25 11:58:25.146454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:28.280 [2024-07-25 11:58:25.146464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:28.280 [2024-07-25 11:58:25.146473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:28.280 [2024-07-25 11:58:25.146482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:28.281 [2024-07-25 11:58:25.146491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:28.281 [2024-07-25 11:58:25.146501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:28.281 [2024-07-25 11:58:25.146511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:28.281 [2024-07-25 11:58:25.146520] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:28.281 [2024-07-25 11:58:25.146530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:28.281 [2024-07-25 11:58:25.146540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:28.281 [2024-07-25 11:58:25.146550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:28.281 [2024-07-25 11:58:25.146565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:28.281 
[2024-07-25 11:58:25.146575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:28.281 [2024-07-25 11:58:25.146584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:28.281 [2024-07-25 11:58:25.146604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:28.281 [2024-07-25 11:58:25.146633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:28.281 [2024-07-25 11:58:25.146643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:28.281 [2024-07-25 11:58:25.146655] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:28.281 [2024-07-25 11:58:25.146669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:28.281 [2024-07-25 11:58:25.146693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:28.281 [2024-07-25 11:58:25.146760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:28.281 [2024-07-25 11:58:25.146779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:28.281 [2024-07-25 11:58:25.146791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:28.281 [2024-07-25 11:58:25.146803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:28.281 [2024-07-25 11:58:25.146814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:28.281 [2024-07-25 11:58:25.146826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:28.281 [2024-07-25 11:58:25.146837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:28.281 [2024-07-25 11:58:25.146848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:28.281 [2024-07-25 11:58:25.146902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:28.281 [2024-07-25 11:58:25.146914] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:28.281 [2024-07-25 11:58:25.146938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:28.281 [2024-07-25 11:58:25.146949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:28.281 [2024-07-25 11:58:25.146966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:28.281 [2024-07-25 11:58:25.146978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.146989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:28.281 [2024-07-25 11:58:25.147001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:29:28.281 [2024-07-25 11:58:25.147012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.192906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.192964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:28.281 [2024-07-25 11:58:25.193007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.790 ms 00:29:28.281 [2024-07-25 11:58:25.193018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.193216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.193236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:28.281 [2024-07-25 11:58:25.193256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:28.281 [2024-07-25 11:58:25.193266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.228496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.228544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:28.281 [2024-07-25 11:58:25.228577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.198 ms 00:29:28.281 [2024-07-25 11:58:25.228588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.228766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.228788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:28.281 [2024-07-25 11:58:25.228802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:28.281 [2024-07-25 11:58:25.228812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.229169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.229205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:28.281 [2024-07-25 11:58:25.229221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:29:28.281 [2024-07-25 11:58:25.229233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 
11:58:25.229394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.229419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:28.281 [2024-07-25 11:58:25.229433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:29:28.281 [2024-07-25 11:58:25.229444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.244951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.244991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:28.281 [2024-07-25 11:58:25.245024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.462 ms 00:29:28.281 [2024-07-25 11:58:25.245035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.260100] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:28.281 [2024-07-25 11:58:25.260143] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:28.281 [2024-07-25 11:58:25.260177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.260188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:28.281 [2024-07-25 11:58:25.260200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.985 ms 00:29:28.281 [2024-07-25 11:58:25.260210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.287417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.287459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:28.281 [2024-07-25 11:58:25.287493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.113 ms 00:29:28.281 [2024-07-25 11:58:25.287503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.281 [2024-07-25 11:58:25.302162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.281 [2024-07-25 11:58:25.302203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:28.281 [2024-07-25 11:58:25.302235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.564 ms 00:29:28.281 [2024-07-25 11:58:25.302245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.540 [2024-07-25 11:58:25.319574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.540 [2024-07-25 11:58:25.319638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:28.540 [2024-07-25 11:58:25.319672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.237 ms 00:29:28.540 [2024-07-25 11:58:25.319694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.540 [2024-07-25 11:58:25.320570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.540 [2024-07-25 11:58:25.320626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:28.540 [2024-07-25 11:58:25.320643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:29:28.540 [2024-07-25 11:58:25.320655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.540 [2024-07-25 11:58:25.391138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.391256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:29:28.540 [2024-07-25 11:58:25.391307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.417 ms
00:29:28.540 [2024-07-25 11:58:25.391326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.405340] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:29:28.540 [2024-07-25 11:58:25.419777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.419840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:29:28.540 [2024-07-25 11:58:25.419876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.240 ms
00:29:28.540 [2024-07-25 11:58:25.419887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.420033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.420055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:29:28.540 [2024-07-25 11:58:25.420068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:29:28.540 [2024-07-25 11:58:25.420079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.420142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.420158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:29:28.540 [2024-07-25 11:58:25.420170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:29:28.540 [2024-07-25 11:58:25.420180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.420212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.420232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:28.540 [2024-07-25 11:58:25.420244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:28.540 [2024-07-25 11:58:25.420254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.420289] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:28.540 [2024-07-25 11:58:25.420304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.420315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:29:28.540 [2024-07-25 11:58:25.420326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:29:28.540 [2024-07-25 11:58:25.420336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.449646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.449721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:28.540 [2024-07-25 11:58:25.449756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.281 ms
00:29:28.540 [2024-07-25 11:58:25.449768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.449915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.540 [2024-07-25 11:58:25.449937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:28.540 [2024-07-25 11:58:25.449950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:29:28.540 [2024-07-25 11:58:25.449961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:28.540 [2024-07-25 11:58:25.451011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:28.540 [2024-07-25 11:58:25.455080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.139 ms, result 0
00:29:28.540 [2024-07-25 11:58:25.456079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:28.540 [2024-07-25 11:58:25.471766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:39.040  Copying: 27/256 [MB] (27 MBps) Copying: 51/256 [MB] (24 MBps) Copying: 76/256 [MB] (24 MBps) Copying: 100/256 [MB] (24 MBps) Copying: 124/256 [MB] (23 MBps) Copying: 148/256 [MB] (24 MBps) Copying: 171/256 [MB] (22 MBps) Copying: 195/256 [MB] (23 MBps) Copying: 218/256 [MB] (23 MBps) Copying: 244/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 24 MBps)
[2024-07-25 11:58:35.945249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:39.040 [2024-07-25 11:58:35.957856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.040 [2024-07-25 11:58:35.957904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:39.040 [2024-07-25 11:58:35.957924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:39.040 [2024-07-25 11:58:35.957935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.040 [2024-07-25 11:58:35.957975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:29:39.040 [2024-07-25 11:58:35.961368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.040 [2024-07-25 11:58:35.961408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:39.040 [2024-07-25 11:58:35.961423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms
00:29:39.040 [2024-07-25 11:58:35.961434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.040 [2024-07-25 11:58:35.961733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.040 [2024-07-25 11:58:35.961753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:39.040 [2024-07-25 11:58:35.961765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms
00:29:39.040 [2024-07-25 11:58:35.961775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.040 [2024-07-25 11:58:35.965579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.040 [2024-07-25 11:58:35.965614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:39.040 [2024-07-25 11:58:35.965637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms
00:29:39.040 [2024-07-25 11:58:35.965648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.040 [2024-07-25 11:58:35.973189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.040 [2024-07-25 11:58:35.973221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:39.040
[2024-07-25 11:58:35.973235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.513 ms 00:29:39.040 [2024-07-25 11:58:35.973246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.040 [2024-07-25 11:58:36.004271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.040 [2024-07-25 11:58:36.004326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:39.040 [2024-07-25 11:58:36.004349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.953 ms 00:29:39.040 [2024-07-25 11:58:36.004360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.040 [2024-07-25 11:58:36.022641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.040 [2024-07-25 11:58:36.022715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:39.040 [2024-07-25 11:58:36.022736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.213 ms 00:29:39.040 [2024-07-25 11:58:36.022769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.040 [2024-07-25 11:58:36.022938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.040 [2024-07-25 11:58:36.022960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:39.040 [2024-07-25 11:58:36.022974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:39.040 [2024-07-25 11:58:36.022985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.040 [2024-07-25 11:58:36.054900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.040 [2024-07-25 11:58:36.054943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:39.040 [2024-07-25 11:58:36.054960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.892 ms 00:29:39.040 [2024-07-25 11:58:36.054971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.300 [2024-07-25 11:58:36.087146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.300 [2024-07-25 11:58:36.087186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:39.300 [2024-07-25 11:58:36.087219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.107 ms 00:29:39.300 [2024-07-25 11:58:36.087229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.300 [2024-07-25 11:58:36.118329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.300 [2024-07-25 11:58:36.118370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:39.300 [2024-07-25 11:58:36.118403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.034 ms 00:29:39.300 [2024-07-25 11:58:36.118414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.300 [2024-07-25 11:58:36.149769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.300 [2024-07-25 11:58:36.149812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:39.300 [2024-07-25 11:58:36.149829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.260 ms 00:29:39.300 [2024-07-25 11:58:36.149840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.300 [2024-07-25 11:58:36.149924] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:39.300 [2024-07-25 11:58:36.149957] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:29:39.301 [2024-07-25 11:58:36.151176] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:39.301 [2024-07-25 11:58:36.151187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e
00:29:39.301 [2024-07-25 11:58:36.151199] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:39.301 [2024-07-25 11:58:36.151210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:39.301 [2024-07-25 11:58:36.151233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:39.301 [2024-07-25 11:58:36.151244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:39.301 [2024-07-25 11:58:36.151254] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:39.301 [2024-07-25 11:58:36.151265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:39.301 [2024-07-25 11:58:36.151276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:39.301 [2024-07-25 11:58:36.151286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:39.301 [2024-07-25 11:58:36.151296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:39.301 [2024-07-25 11:58:36.151307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.301 [2024-07-25 11:58:36.151318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:29:39.301 [2024-07-25 11:58:36.151334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms
00:29:39.301 [2024-07-25 11:58:36.151345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.301 [2024-07-25 11:58:36.168240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.301 [2024-07-25 11:58:36.168283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:29:39.301 [2024-07-25 11:58:36.168316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.868 ms
00:29:39.301 [2024-07-25 11:58:36.168328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.301 [2024-07-25 11:58:36.168818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.301 [2024-07-25 11:58:36.168854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:29:39.301 [2024-07-25 11:58:36.168869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms
00:29:39.301 [2024-07-25 11:58:36.168880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.301 [2024-07-25 11:58:36.208843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:39.301 [2024-07-25 11:58:36.208893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:39.301 [2024-07-25 11:58:36.208926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:39.301 [2024-07-25 11:58:36.208936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.301 [2024-07-25 11:58:36.209033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:39.301 [2024-07-25 11:58:36.209053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:39.301 [2024-07-25 11:58:36.209065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:39.301 [2024-07-25 11:58:36.209076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:29:39.301 [2024-07-25 11:58:36.209134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.301 [2024-07-25 11:58:36.209152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:39.301 [2024-07-25 11:58:36.209164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.301 [2024-07-25 11:58:36.209175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.301 [2024-07-25 11:58:36.209199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.301 [2024-07-25 11:58:36.209212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:39.301 [2024-07-25 11:58:36.209229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.302 [2024-07-25 11:58:36.209239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.302 [2024-07-25 11:58:36.310007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.302 [2024-07-25 11:58:36.310078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:39.302 [2024-07-25 11:58:36.310097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.302 [2024-07-25 11:58:36.310108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.396625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.396781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:39.560 [2024-07-25 11:58:36.396802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 [2024-07-25 11:58:36.396813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.396924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.396942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:39.560 [2024-07-25 11:58:36.396954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 [2024-07-25 11:58:36.396965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.396999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.397013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:39.560 [2024-07-25 11:58:36.397024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 [2024-07-25 11:58:36.397040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.397164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.397184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:39.560 [2024-07-25 11:58:36.397195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 [2024-07-25 11:58:36.397206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.397296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.397314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:39.560 [2024-07-25 11:58:36.397326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 
[2024-07-25 11:58:36.397337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.397393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.397409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:39.560 [2024-07-25 11:58:36.397421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 [2024-07-25 11:58:36.397431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.397490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.560 [2024-07-25 11:58:36.397514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:39.560 [2024-07-25 11:58:36.397528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.560 [2024-07-25 11:58:36.397545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.560 [2024-07-25 11:58:36.397717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.851 ms, result 0 00:29:40.494 00:29:40.494 00:29:40.494 11:58:37 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:29:40.494 11:58:37 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:29:41.435 11:58:38 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:41.435 [2024-07-25 11:58:38.204277] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:41.435 [2024-07-25 11:58:38.204452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79886 ] 00:29:41.435 [2024-07-25 11:58:38.375779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.694 [2024-07-25 11:58:38.564648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.953 [2024-07-25 11:58:38.872809] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:41.953 [2024-07-25 11:58:38.872899] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:42.213 [2024-07-25 11:58:39.033429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.033480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:42.213 [2024-07-25 11:58:39.033516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:42.213 [2024-07-25 11:58:39.033528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.036635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.036678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:42.213 [2024-07-25 11:58:39.036758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:29:42.213 [2024-07-25 11:58:39.036771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.036914] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:42.213 [2024-07-25 11:58:39.037959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:42.213 [2024-07-25 11:58:39.037999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.038030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:42.213 [2024-07-25 11:58:39.038043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:29:42.213 [2024-07-25 11:58:39.038054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.039367] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:42.213 [2024-07-25 11:58:39.056035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.056092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:42.213 [2024-07-25 11:58:39.056131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.669 ms 00:29:42.213 [2024-07-25 11:58:39.056143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.056260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.056282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:42.213 [2024-07-25 11:58:39.056295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:42.213 [2024-07-25 11:58:39.056305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.061089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:42.213 [2024-07-25 11:58:39.061133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:42.213 [2024-07-25 11:58:39.061149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.730 ms 00:29:42.213 [2024-07-25 11:58:39.061161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.061287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.061309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:42.213 [2024-07-25 11:58:39.061322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:42.213 [2024-07-25 11:58:39.061334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.061377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.061394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:42.213 [2024-07-25 11:58:39.061410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:42.213 [2024-07-25 11:58:39.061422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.061454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:42.213 [2024-07-25 11:58:39.065881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.065919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:42.213 [2024-07-25 11:58:39.065935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.436 ms 00:29:42.213 [2024-07-25 11:58:39.065947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.066018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.066037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:42.213 [2024-07-25 11:58:39.066064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:42.213 [2024-07-25 11:58:39.066075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.066122] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:42.213 [2024-07-25 11:58:39.066151] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:42.213 [2024-07-25 11:58:39.066195] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:42.213 [2024-07-25 11:58:39.066218] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:42.213 [2024-07-25 11:58:39.066319] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:42.213 [2024-07-25 11:58:39.066335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:42.213 [2024-07-25 11:58:39.066348] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:42.213 [2024-07-25 11:58:39.066362] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:42.213 [2024-07-25 11:58:39.066375] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:42.213 [2024-07-25 11:58:39.066392] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:42.213 [2024-07-25 11:58:39.066402] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:42.213 [2024-07-25 11:58:39.066413] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:42.213 [2024-07-25 11:58:39.066423] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:42.213 [2024-07-25 11:58:39.066434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.066445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:42.213 [2024-07-25 11:58:39.066456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:29:42.213 [2024-07-25 11:58:39.066467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.066579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.213 [2024-07-25 11:58:39.066622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:42.213 [2024-07-25 11:58:39.066641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:42.213 [2024-07-25 11:58:39.066652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.213 [2024-07-25 11:58:39.066786] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:42.213 [2024-07-25 11:58:39.066807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:42.213 [2024-07-25 11:58:39.066820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:42.213 [2024-07-25 11:58:39.066831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.066843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:42.214 [2024-07-25 11:58:39.066853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.066864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:42.214 [2024-07-25 11:58:39.066874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:42.214 [2024-07-25 11:58:39.066885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:42.214 [2024-07-25 11:58:39.066895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:42.214 [2024-07-25 11:58:39.066905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:42.214 [2024-07-25 11:58:39.066916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:42.214 [2024-07-25 11:58:39.066926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:42.214 [2024-07-25 11:58:39.066936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:42.214 [2024-07-25 11:58:39.066947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:42.214 [2024-07-25 11:58:39.066957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.066968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:42.214 [2024-07-25 11:58:39.066979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067003] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:42.214 [2024-07-25 11:58:39.067025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:42.214 [2024-07-25 11:58:39.067056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:42.214 [2024-07-25 11:58:39.067086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:42.214 [2024-07-25 11:58:39.067116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:42.214 [2024-07-25 11:58:39.067153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:42.214 [2024-07-25 11:58:39.067173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:42.214 [2024-07-25 11:58:39.067184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:42.214 [2024-07-25 11:58:39.067194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:42.214 [2024-07-25 11:58:39.067204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:42.214 [2024-07-25 11:58:39.067214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:42.214 [2024-07-25 11:58:39.067224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:42.214 [2024-07-25 11:58:39.067245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:42.214 [2024-07-25 11:58:39.067255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067265] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:42.214 [2024-07-25 11:58:39.067276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:42.214 [2024-07-25 11:58:39.067287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:42.214 [2024-07-25 11:58:39.067314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:42.214 [2024-07-25 11:58:39.067325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:42.214 [2024-07-25 11:58:39.067335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:42.214 
[2024-07-25 11:58:39.067346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:42.214 [2024-07-25 11:58:39.067356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:42.214 [2024-07-25 11:58:39.067367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:42.214 [2024-07-25 11:58:39.067378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:42.214 [2024-07-25 11:58:39.067392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:42.214 [2024-07-25 11:58:39.067417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:42.214 [2024-07-25 11:58:39.067429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:42.214 [2024-07-25 11:58:39.067440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:42.214 [2024-07-25 11:58:39.067451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:42.214 [2024-07-25 11:58:39.067463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:42.214 [2024-07-25 11:58:39.067474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:42.214 [2024-07-25 11:58:39.067485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:42.214 [2024-07-25 11:58:39.067496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:42.214 [2024-07-25 11:58:39.067507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:42.214 [2024-07-25 11:58:39.067563] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:42.214 [2024-07-25 11:58:39.067576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:42.214 [2024-07-25 11:58:39.067599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:42.214 [2024-07-25 11:58:39.067610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:42.214 [2024-07-25 11:58:39.067621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:42.214 [2024-07-25 11:58:39.067633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.214 [2024-07-25 11:58:39.067645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:42.214 [2024-07-25 11:58:39.067656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:29:42.214 [2024-07-25 11:58:39.067667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.214 [2024-07-25 11:58:39.105810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.214 [2024-07-25 11:58:39.105870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:42.214 [2024-07-25 11:58:39.105912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.053 ms 00:29:42.214 [2024-07-25 11:58:39.105925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.214 [2024-07-25 11:58:39.106129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.214 [2024-07-25 11:58:39.106164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:42.214 [2024-07-25 11:58:39.106183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:42.214 [2024-07-25 11:58:39.106194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.214 [2024-07-25 11:58:39.142502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.214 [2024-07-25 11:58:39.142563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:42.214 [2024-07-25 11:58:39.142622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.275 ms 00:29:42.214 [2024-07-25 11:58:39.142635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.214 [2024-07-25 11:58:39.142800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.214 [2024-07-25 11:58:39.142822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:42.214 [2024-07-25 11:58:39.142837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:42.214 [2024-07-25 11:58:39.142849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.143252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.143277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:42.215 [2024-07-25 11:58:39.143292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:29:42.215 [2024-07-25 11:58:39.143303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.143465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.143485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:42.215 [2024-07-25 11:58:39.143498] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:29:42.215 [2024-07-25 11:58:39.143509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.159526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.159565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:42.215 [2024-07-25 11:58:39.159599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.988 ms 00:29:42.215 [2024-07-25 11:58:39.159610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.175409] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:42.215 [2024-07-25 11:58:39.175450] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:42.215 [2024-07-25 11:58:39.175484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.175496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:42.215 [2024-07-25 11:58:39.175508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.691 ms 00:29:42.215 [2024-07-25 11:58:39.175519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.203828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.203868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:42.215 [2024-07-25 11:58:39.203900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.217 ms 00:29:42.215 [2024-07-25 11:58:39.203912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.219584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.219624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:42.215 [2024-07-25 11:58:39.219657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.580 ms 00:29:42.215 [2024-07-25 11:58:39.219668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.235188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.235226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:42.215 [2024-07-25 11:58:39.235257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.396 ms 00:29:42.215 [2024-07-25 11:58:39.235268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.215 [2024-07-25 11:58:39.236092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.215 [2024-07-25 11:58:39.236129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:42.215 [2024-07-25 11:58:39.236145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:29:42.215 [2024-07-25 11:58:39.236157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.308809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.308871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:42.474 [2024-07-25 11:58:39.308907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.617 ms 00:29:42.474 [2024-07-25 11:58:39.308921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.321547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:42.474 [2024-07-25 11:58:39.335312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.335378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:42.474 [2024-07-25 11:58:39.335413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.243 ms 00:29:42.474 [2024-07-25 11:58:39.335426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.335569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.335591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:42.474 [2024-07-25 11:58:39.335604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:42.474 [2024-07-25 11:58:39.335616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.335681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.335698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:42.474 [2024-07-25 11:58:39.335737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:42.474 [2024-07-25 11:58:39.335750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.335802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.335823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:42.474 [2024-07-25 11:58:39.335835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:42.474 [2024-07-25 11:58:39.335846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.335884] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:42.474 [2024-07-25 11:58:39.335901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.335913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:42.474 [2024-07-25 11:58:39.335925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:42.474 [2024-07-25 11:58:39.335936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.366998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.367061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:42.474 [2024-07-25 11:58:39.367096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.030 ms 00:29:42.474 [2024-07-25 11:58:39.367107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.474 [2024-07-25 11:58:39.367237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.474 [2024-07-25 11:58:39.367258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:42.474 [2024-07-25 11:58:39.367271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:42.474 [2024-07-25 11:58:39.367282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:42.474 [2024-07-25 11:58:39.368211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:42.474 [2024-07-25 11:58:39.372398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.449 ms, result 0 00:29:42.474 [2024-07-25 11:58:39.373250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:42.474 [2024-07-25 11:58:39.389928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:42.733  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-25 11:58:39.558852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:42.733 [2024-07-25 11:58:39.570548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.570599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:42.733 [2024-07-25 11:58:39.570638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:42.733 [2024-07-25 11:58:39.570650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.570691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:42.733 [2024-07-25 11:58:39.573995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.574029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:42.733 [2024-07-25 11:58:39.574044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:29:42.733 [2024-07-25 11:58:39.574056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.575668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.575756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:42.733 [2024-07-25 11:58:39.575775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:29:42.733 [2024-07-25 11:58:39.575787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.579856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.579901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:42.733 [2024-07-25 11:58:39.579926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:29:42.733 [2024-07-25 11:58:39.579938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.587671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.587732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:42.733 [2024-07-25 11:58:39.587749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.688 ms 00:29:42.733 [2024-07-25 11:58:39.587761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.619238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.619282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:42.733 [2024-07-25 11:58:39.619316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.393 ms 00:29:42.733 [2024-07-25 11:58:39.619328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.636934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.636976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:42.733 [2024-07-25 11:58:39.637010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.543 ms 00:29:42.733 [2024-07-25 11:58:39.637027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.637190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.637227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:42.733 [2024-07-25 11:58:39.637241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:42.733 [2024-07-25 11:58:39.637253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.668779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.668820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:42.733 [2024-07-25 11:58:39.668853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.502 ms 00:29:42.733 [2024-07-25 11:58:39.668865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.701097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.701168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:42.733 [2024-07-25 11:58:39.701201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.161 ms 00:29:42.733 [2024-07-25 11:58:39.701211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.731551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.731591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:42.733 [2024-07-25 11:58:39.731623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.258 ms 00:29:42.733 [2024-07-25 11:58:39.731634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.762523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.733 [2024-07-25 11:58:39.762599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:42.733 [2024-07-25 11:58:39.762617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.763 ms 00:29:42.733 [2024-07-25 11:58:39.762629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.733 [2024-07-25 11:58:39.762749] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:42.733 [2024-07-25 11:58:39.762809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 
11:58:39.762908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.762993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:42.733 [2024-07-25 11:58:39.763407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:29:42.734 [2024-07-25 11:58:39.763419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.763996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:42.734 [2024-07-25 11:58:39.764287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:42.734 [2024-07-25 11:58:39.764300] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e 00:29:42.734 [2024-07-25 11:58:39.764312] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:42.734 [2024-07-25 11:58:39.764323] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:42.734 
[2024-07-25 11:58:39.764349] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:42.734 [2024-07-25 11:58:39.764362] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:42.734 [2024-07-25 11:58:39.764373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:42.734 [2024-07-25 11:58:39.764385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:42.734 [2024-07-25 11:58:39.764396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:42.734 [2024-07-25 11:58:39.764406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:42.734 [2024-07-25 11:58:39.764416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:42.734 [2024-07-25 11:58:39.764428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.734 [2024-07-25 11:58:39.764440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:42.734 [2024-07-25 11:58:39.764459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:29:42.734 [2024-07-25 11:58:39.764470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.781041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.993 [2024-07-25 11:58:39.781083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:42.993 [2024-07-25 11:58:39.781100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.539 ms 00:29:42.993 [2024-07-25 11:58:39.781112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.781571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:42.993 [2024-07-25 11:58:39.781594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:42.993 [2024-07-25 11:58:39.781609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:29:42.993 [2024-07-25 11:58:39.781620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.822677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:39.822765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:42.993 [2024-07-25 11:58:39.822786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:39.822798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.822933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:39.822950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:42.993 [2024-07-25 11:58:39.822963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:39.822974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.823047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:39.823066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:42.993 [2024-07-25 11:58:39.823079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:39.823090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.823126] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:39.823147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:42.993 [2024-07-25 11:58:39.823159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:39.823171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:39.926138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:39.926215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:42.993 [2024-07-25 11:58:39.926235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:39.926247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.015389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.015483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:42.993 [2024-07-25 11:58:40.015505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.015517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.015606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.015624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:42.993 [2024-07-25 11:58:40.015637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.015649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.015685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.015732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:42.993 [2024-07-25 11:58:40.015745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.015763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.015888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.015908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:42.993 [2024-07-25 11:58:40.015921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.015932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.015984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.016002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:42.993 [2024-07-25 11:58:40.016014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.016032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.016081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.016103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:42.993 [2024-07-25 11:58:40.016115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.016127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:42.993 [2024-07-25 11:58:40.016182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:42.993 [2024-07-25 11:58:40.016200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:42.993 [2024-07-25 11:58:40.016213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:42.993 [2024-07-25 11:58:40.016230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:42.993 [2024-07-25 11:58:40.016398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.836 ms, result 0 00:29:44.367 00:29:44.367 00:29:44.367 11:58:41 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79918 00:29:44.367 11:58:41 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:29:44.367 11:58:41 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79918 00:29:44.367 11:58:41 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79918 ']' 00:29:44.367 11:58:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.367 11:58:41 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.367 11:58:41 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.367 11:58:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.367 11:58:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:29:44.367 [2024-07-25 11:58:41.215121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:44.367 [2024-07-25 11:58:41.215294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79918 ] 00:29:44.367 [2024-07-25 11:58:41.387362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.625 [2024-07-25 11:58:41.623403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.558 11:58:42 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:45.558 11:58:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:29:45.558 11:58:42 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:29:45.817 [2024-07-25 11:58:42.603421] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:45.817 [2024-07-25 11:58:42.603498] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:45.817 [2024-07-25 11:58:42.780056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.780120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:45.817 [2024-07-25 11:58:42.780142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:45.817 [2024-07-25 11:58:42.780156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.783397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.783460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:45.817 [2024-07-25 11:58:42.783478] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:29:45.817 [2024-07-25 11:58:42.783493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.783616] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:45.817 [2024-07-25 11:58:42.784582] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:45.817 [2024-07-25 11:58:42.784628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.784647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:45.817 [2024-07-25 11:58:42.784661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:29:45.817 [2024-07-25 11:58:42.784678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.785948] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:45.817 [2024-07-25 11:58:42.801952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.801997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:45.817 [2024-07-25 11:58:42.802018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.000 ms 00:29:45.817 [2024-07-25 11:58:42.802030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.802163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.802186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:45.817 [2024-07-25 11:58:42.802203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:45.817 [2024-07-25 11:58:42.802215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.806717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.806760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:45.817 [2024-07-25 11:58:42.806784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.422 ms 00:29:45.817 [2024-07-25 11:58:42.806796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.806933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.806956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:45.817 [2024-07-25 11:58:42.806971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:29:45.817 [2024-07-25 11:58:42.806987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.807028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.807044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:45.817 [2024-07-25 11:58:42.807058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:45.817 [2024-07-25 11:58:42.807070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.807106] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:45.817 [2024-07-25 11:58:42.811341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:45.817 [2024-07-25 11:58:42.811385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:45.817 [2024-07-25 11:58:42.811402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.246 ms 00:29:45.817 [2024-07-25 11:58:42.811416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.811481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.811507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:45.817 [2024-07-25 11:58:42.811523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:45.817 [2024-07-25 11:58:42.811537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.811566] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:45.817 [2024-07-25 11:58:42.811595] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:45.817 [2024-07-25 11:58:42.811645] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:45.817 [2024-07-25 11:58:42.811674] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:45.817 [2024-07-25 11:58:42.811803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:45.817 [2024-07-25 11:58:42.811835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:45.817 [2024-07-25 11:58:42.811852] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:45.817 [2024-07-25 11:58:42.811870] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:45.817 [2024-07-25 11:58:42.811885] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:45.817 [2024-07-25 11:58:42.811900] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:45.817 [2024-07-25 11:58:42.811911] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:45.817 [2024-07-25 11:58:42.811924] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:45.817 [2024-07-25 11:58:42.811935] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:45.817 [2024-07-25 11:58:42.811952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.811964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:45.817 [2024-07-25 11:58:42.811978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:29:45.817 [2024-07-25 11:58:42.811992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.812090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.817 [2024-07-25 11:58:42.812105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:45.817 [2024-07-25 11:58:42.812119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:45.817 [2024-07-25 11:58:42.812130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.817 [2024-07-25 11:58:42.812273] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:45.817 [2024-07-25 11:58:42.812302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:45.817 [2024-07-25 11:58:42.812319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:45.817 [2024-07-25 11:58:42.812332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.817 [2024-07-25 11:58:42.812351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:45.817 [2024-07-25 11:58:42.812362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:45.817 [2024-07-25 11:58:42.812375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:45.817 [2024-07-25 11:58:42.812386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:45.817 [2024-07-25 11:58:42.812402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:45.817 [2024-07-25 11:58:42.812412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:45.817 [2024-07-25 11:58:42.812425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:45.817 [2024-07-25 11:58:42.812436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:45.817 [2024-07-25 11:58:42.812449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:45.817 [2024-07-25 11:58:42.812460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:45.817 [2024-07-25 11:58:42.812473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:45.817 [2024-07-25 11:58:42.812483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.817 [2024-07-25 11:58:42.812502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:45.817 [2024-07-25 11:58:42.812513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:45.817 [2024-07-25 11:58:42.812526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.817 [2024-07-25 11:58:42.812537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:45.818 [2024-07-25 11:58:42.812549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.818 [2024-07-25 11:58:42.812573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:45.818 [2024-07-25 11:58:42.812584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.818 [2024-07-25 11:58:42.812610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:45.818 [2024-07-25 11:58:42.812623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.818 [2024-07-25 11:58:42.812660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:45.818 [2024-07-25 11:58:42.812673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.818 [2024-07-25 11:58:42.812714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:45.818 [2024-07-25 
11:58:42.812728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:45.818 [2024-07-25 11:58:42.812753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:45.818 [2024-07-25 11:58:42.812764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:45.818 [2024-07-25 11:58:42.812776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:45.818 [2024-07-25 11:58:42.812788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:45.818 [2024-07-25 11:58:42.812801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:45.818 [2024-07-25 11:58:42.812812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:45.818 [2024-07-25 11:58:42.812838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:45.818 [2024-07-25 11:58:42.812850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812861] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:45.818 [2024-07-25 11:58:42.812875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:45.818 [2024-07-25 11:58:42.812886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:45.818 [2024-07-25 11:58:42.812900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.818 [2024-07-25 11:58:42.812911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:45.818 [2024-07-25 11:58:42.812924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:45.818 [2024-07-25 11:58:42.812935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:45.818 [2024-07-25 11:58:42.812948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:45.818 [2024-07-25 11:58:42.812959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:45.818 [2024-07-25 11:58:42.812972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:45.818 [2024-07-25 11:58:42.812985] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:45.818 [2024-07-25 11:58:42.813002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:45.818 [2024-07-25 11:58:42.813034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:45.818 [2024-07-25 11:58:42.813046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:45.818 [2024-07-25 11:58:42.813059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:45.818 [2024-07-25 11:58:42.813072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:45.818 
[2024-07-25 11:58:42.813085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:45.818 [2024-07-25 11:58:42.813099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:45.818 [2024-07-25 11:58:42.813113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:45.818 [2024-07-25 11:58:42.813125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:45.818 [2024-07-25 11:58:42.813139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:45.818 [2024-07-25 11:58:42.813204] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:45.818 [2024-07-25 11:58:42.813219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:45.818 [2024-07-25 11:58:42.813248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:45.818 [2024-07-25 11:58:42.813259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:45.818 [2024-07-25 11:58:42.813273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:45.818 [2024-07-25 11:58:42.813286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.818 [2024-07-25 11:58:42.813299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:45.818 [2024-07-25 11:58:42.813312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:29:45.818 [2024-07-25 11:58:42.813328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.818 [2024-07-25 11:58:42.846566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.818 [2024-07-25 11:58:42.846640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:45.818 [2024-07-25 11:58:42.846666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.160 ms 00:29:45.818 [2024-07-25 11:58:42.846681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.818 [2024-07-25 11:58:42.846882] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.818 [2024-07-25 11:58:42.846909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:45.818 [2024-07-25 11:58:42.846924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:45.818 [2024-07-25 11:58:42.846938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.885420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.885479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.077 [2024-07-25 11:58:42.885499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.454 ms 00:29:46.077 [2024-07-25 11:58:42.885513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.885635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.885660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.077 [2024-07-25 11:58:42.885675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:46.077 [2024-07-25 11:58:42.885710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.886044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.886081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.077 [2024-07-25 11:58:42.886097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:29:46.077 [2024-07-25 11:58:42.886111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.886263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.886293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.077 [2024-07-25 11:58:42.886307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:29:46.077 [2024-07-25 11:58:42.886320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.903939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.903988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.077 [2024-07-25 11:58:42.904006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.590 ms 00:29:46.077 [2024-07-25 11:58:42.904021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.920343] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:46.077 [2024-07-25 11:58:42.920391] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:46.077 [2024-07-25 11:58:42.920413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.920429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:46.077 [2024-07-25 11:58:42.920442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.245 ms 00:29:46.077 [2024-07-25 11:58:42.920455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.950234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 
11:58:42.950282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:46.077 [2024-07-25 11:58:42.950301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.670 ms 00:29:46.077 [2024-07-25 11:58:42.950319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.966027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.966076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:46.077 [2024-07-25 11:58:42.966106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.614 ms 00:29:46.077 [2024-07-25 11:58:42.966123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.981600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.981647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:46.077 [2024-07-25 11:58:42.981666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.384 ms 00:29:46.077 [2024-07-25 11:58:42.981680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:42.982499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:42.982542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:46.077 [2024-07-25 11:58:42.982559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:29:46.077 [2024-07-25 11:58:42.982573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:43.065175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:43.065251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:46.077 [2024-07-25 11:58:43.065275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.560 ms 00:29:46.077 [2024-07-25 11:58:43.065290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:43.078112] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:46.077 [2024-07-25 11:58:43.092302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:43.092371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:46.077 [2024-07-25 11:58:43.092398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.863 ms 00:29:46.077 [2024-07-25 11:58:43.092411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:43.092567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:43.092588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:46.077 [2024-07-25 11:58:43.092605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:46.077 [2024-07-25 11:58:43.092617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:43.092685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:43.092701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:46.077 [2024-07-25 11:58:43.092761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:46.077 
[2024-07-25 11:58:43.092773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:43.092812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:43.092828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:46.077 [2024-07-25 11:58:43.092843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:46.077 [2024-07-25 11:58:43.092854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.077 [2024-07-25 11:58:43.092899] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:46.077 [2024-07-25 11:58:43.092915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.077 [2024-07-25 11:58:43.092932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:46.077 [2024-07-25 11:58:43.092945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:46.077 [2024-07-25 11:58:43.092961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.336 [2024-07-25 11:58:43.124506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.336 [2024-07-25 11:58:43.124591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:46.336 [2024-07-25 11:58:43.124612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.512 ms 00:29:46.336 [2024-07-25 11:58:43.124626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.336 [2024-07-25 11:58:43.124887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.336 [2024-07-25 11:58:43.124921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:46.336 [2024-07-25 11:58:43.124940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:46.336 [2024-07-25 11:58:43.124954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.336 [2024-07-25 11:58:43.126140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:46.336 [2024-07-25 11:58:43.130268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.723 ms, result 0 00:29:46.336 [2024-07-25 11:58:43.131381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:46.336 Some configs were skipped because the RPC state that can call them passed over. 
00:29:46.336 11:58:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:29:46.593 [2024-07-25 11:58:43.393050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.593 [2024-07-25 11:58:43.393389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:46.593 [2024-07-25 11:58:43.393535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:29:46.593 [2024-07-25 11:58:43.393589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.593 [2024-07-25 11:58:43.393811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.344 ms, result 0 00:29:46.593 true 00:29:46.593 11:58:43 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:29:46.850 [2024-07-25 11:58:43.664922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.850 [2024-07-25 11:58:43.665138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:46.850 [2024-07-25 11:58:43.665271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:29:46.850 [2024-07-25 11:58:43.665412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.850 [2024-07-25 11:58:43.665524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.749 ms, result 0 00:29:46.850 true 00:29:46.850 11:58:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79918 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79918 ']' 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79918 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79918 00:29:46.850 killing process with pid 79918 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79918' 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79918 00:29:46.850 11:58:43 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79918 00:29:47.785 [2024-07-25 11:58:44.616186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.616255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:47.785 [2024-07-25 11:58:44.616295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:47.785 [2024-07-25 11:58:44.616309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.616344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:47.785 [2024-07-25 11:58:44.619533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.619571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:47.785 [2024-07-25 11:58:44.619602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.166 ms 00:29:47.785 [2024-07-25 11:58:44.619617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.619908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.619931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:47.785 [2024-07-25 11:58:44.619943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:29:47.785 [2024-07-25 11:58:44.619955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.624060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.624121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:47.785 [2024-07-25 11:58:44.624140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:29:47.785 [2024-07-25 11:58:44.624154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.631291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.631329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:47.785 [2024-07-25 11:58:44.631360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.091 ms 00:29:47.785 [2024-07-25 11:58:44.631375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.642996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.643053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:47.785 [2024-07-25 11:58:44.643085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.566 ms 00:29:47.785 [2024-07-25 11:58:44.643100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.651553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.651613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:47.785 [2024-07-25 11:58:44.651647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.411 ms 00:29:47.785 [2024-07-25 11:58:44.651659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.651867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.651892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:47.785 [2024-07-25 11:58:44.651906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:29:47.785 [2024-07-25 11:58:44.651931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.663977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.664020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:47.785 [2024-07-25 11:58:44.664052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.022 ms 00:29:47.785 [2024-07-25 11:58:44.664064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.676618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.676677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:47.785 [2024-07-25 
11:58:44.676694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:29:47.785 [2024-07-25 11:58:44.676743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.689537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.689584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:47.785 [2024-07-25 11:58:44.689601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.745 ms 00:29:47.785 [2024-07-25 11:58:44.689630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.702163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.785 [2024-07-25 11:58:44.702205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:47.785 [2024-07-25 11:58:44.702236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.413 ms 00:29:47.785 [2024-07-25 11:58:44.702249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.785 [2024-07-25 11:58:44.702290] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:47.785 [2024-07-25 11:58:44.702318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:47.785 [2024-07-25 11:58:44.702335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:47.785 [2024-07-25 11:58:44.702348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702522] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 
11:58:44.702912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.702989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:29:47.786 [2024-07-25 11:58:44.703237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:47.786 [2024-07-25 11:58:44.703479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:47.787 [2024-07-25 11:58:44.703711] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:47.787 [2024-07-25 11:58:44.703726] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e 00:29:47.787 [2024-07-25 11:58:44.703742] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:47.787 [2024-07-25 11:58:44.703754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:47.787 [2024-07-25 11:58:44.703766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:47.787 [2024-07-25 11:58:44.703778] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:47.787 [2024-07-25 11:58:44.703790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:47.787 [2024-07-25 11:58:44.703802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:47.787 [2024-07-25 11:58:44.703815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:47.787 [2024-07-25 11:58:44.703825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:47.787 [2024-07-25 11:58:44.703850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:47.787 [2024-07-25 11:58:44.703862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.787 [2024-07-25 11:58:44.703876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:47.787 [2024-07-25 11:58:44.703888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:29:47.787 [2024-07-25 11:58:44.703904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.787 [2024-07-25 11:58:44.720189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.787 [2024-07-25 11:58:44.720281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:47.787 [2024-07-25 11:58:44.720300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.242 ms 00:29:47.787 [2024-07-25 11:58:44.720317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.787 [2024-07-25 11:58:44.720887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:47.787 [2024-07-25 11:58:44.720923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:47.787 [2024-07-25 11:58:44.720943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:29:47.787 [2024-07-25 11:58:44.720957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.787 [2024-07-25 11:58:44.773395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.787 [2024-07-25 11:58:44.773486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.787 [2024-07-25 11:58:44.773522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.787 [2024-07-25 11:58:44.773551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.787 [2024-07-25 11:58:44.773689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.787 [2024-07-25 11:58:44.773713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.787 [2024-07-25 11:58:44.773729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.787 [2024-07-25 11:58:44.773766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.787 [2024-07-25 11:58:44.773851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.787 [2024-07-25 11:58:44.773874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.787 [2024-07-25 11:58:44.773888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.787 [2024-07-25 11:58:44.773904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.787 [2024-07-25 11:58:44.773930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.787 [2024-07-25 11:58:44.773946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.787 [2024-07-25 11:58:44.773958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.787 [2024-07-25 11:58:44.773974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.868309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.868430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:48.046 [2024-07-25 11:58:44.868450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.868465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.946583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.946682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:48.046 [2024-07-25 11:58:44.946725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.946761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.946855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.946879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:48.046 [2024-07-25 11:58:44.946893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.946909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:48.046 [2024-07-25 11:58:44.946946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.946963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:48.046 [2024-07-25 11:58:44.946976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.946989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.947147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.947172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:48.046 [2024-07-25 11:58:44.947186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.947199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.947252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.947281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:48.046 [2024-07-25 11:58:44.947295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.947308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.947360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.947379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:48.046 [2024-07-25 11:58:44.947391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.947407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.947463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.046 [2024-07-25 11:58:44.947484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:48.046 [2024-07-25 11:58:44.947498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.046 [2024-07-25 11:58:44.947514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.046 [2024-07-25 11:58:44.947674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.501 ms, result 0 00:29:48.980 11:58:45 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:48.980 [2024-07-25 11:58:45.958992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:48.980 [2024-07-25 11:58:45.959430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79982 ] 00:29:49.238 [2024-07-25 11:58:46.146535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.497 [2024-07-25 11:58:46.330545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.755 [2024-07-25 11:58:46.639544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.755 [2024-07-25 11:58:46.639633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:50.015 [2024-07-25 11:58:46.800729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.015 [2024-07-25 11:58:46.800780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:50.015 [2024-07-25 11:58:46.800816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:50.015 [2024-07-25 11:58:46.800827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.015 [2024-07-25 11:58:46.804034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.015 [2024-07-25 11:58:46.804077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:50.015 [2024-07-25 11:58:46.804110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.177 ms 00:29:50.015 [2024-07-25 11:58:46.804121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.015 [2024-07-25 11:58:46.804255] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:50.015 [2024-07-25 11:58:46.805279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:50.015 [2024-07-25 11:58:46.805323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.015 [2024-07-25 11:58:46.805338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:50.015 [2024-07-25 11:58:46.805350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:29:50.015 [2024-07-25 11:58:46.805361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.015 [2024-07-25 11:58:46.806627] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:50.015 [2024-07-25 11:58:46.823353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.015 [2024-07-25 11:58:46.823398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:50.015 [2024-07-25 11:58:46.823421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.728 ms 00:29:50.015 [2024-07-25 11:58:46.823433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.823571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.823593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:50.016 [2024-07-25 11:58:46.823606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:50.016 [2024-07-25 11:58:46.823617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.828197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:50.016 [2024-07-25 11:58:46.828241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:50.016 [2024-07-25 11:58:46.828257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.524 ms 00:29:50.016 [2024-07-25 11:58:46.828268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.828397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.828418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:50.016 [2024-07-25 11:58:46.828431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:50.016 [2024-07-25 11:58:46.828442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.828485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.828501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:50.016 [2024-07-25 11:58:46.828517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:50.016 [2024-07-25 11:58:46.828543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.828608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:50.016 [2024-07-25 11:58:46.833043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.833078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:50.016 [2024-07-25 11:58:46.833109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.445 ms 00:29:50.016 [2024-07-25 11:58:46.833119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.833227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.833247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:50.016 [2024-07-25 11:58:46.833259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:50.016 [2024-07-25 11:58:46.833271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.833304] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:50.016 [2024-07-25 11:58:46.833332] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:50.016 [2024-07-25 11:58:46.833378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:50.016 [2024-07-25 11:58:46.833400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:50.016 [2024-07-25 11:58:46.833505] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:50.016 [2024-07-25 11:58:46.833521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:50.016 [2024-07-25 11:58:46.833550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:50.016 [2024-07-25 11:58:46.833563] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:50.016 [2024-07-25 11:58:46.833575] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:50.016 [2024-07-25 11:58:46.833591] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:50.016 [2024-07-25 11:58:46.833601] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:50.016 [2024-07-25 11:58:46.833611] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:50.016 [2024-07-25 11:58:46.833621] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:50.016 [2024-07-25 11:58:46.833632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.833642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:50.016 [2024-07-25 11:58:46.833653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:29:50.016 [2024-07-25 11:58:46.833663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.833807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.016 [2024-07-25 11:58:46.833826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:50.016 [2024-07-25 11:58:46.833843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:50.016 [2024-07-25 11:58:46.833853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.016 [2024-07-25 11:58:46.833976] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:50.016 [2024-07-25 11:58:46.833993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:50.016 [2024-07-25 11:58:46.834005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:50.016 [2024-07-25 11:58:46.834037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:50.016 [2024-07-25 11:58:46.834067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.016 [2024-07-25 11:58:46.834087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:50.016 [2024-07-25 11:58:46.834097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:50.016 [2024-07-25 11:58:46.834106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.016 [2024-07-25 11:58:46.834116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:50.016 [2024-07-25 11:58:46.834126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:50.016 [2024-07-25 11:58:46.834152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:50.016 [2024-07-25 11:58:46.834172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834197] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:50.016 [2024-07-25 11:58:46.834218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:50.016 [2024-07-25 11:58:46.834248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:50.016 [2024-07-25 11:58:46.834278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:50.016 [2024-07-25 11:58:46.834308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:50.016 [2024-07-25 11:58:46.834338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.016 [2024-07-25 11:58:46.834364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:50.016 [2024-07-25 11:58:46.834374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:50.016 [2024-07-25 11:58:46.834383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.016 [2024-07-25 11:58:46.834393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:50.016 [2024-07-25 11:58:46.834403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:50.016 [2024-07-25 11:58:46.834413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:50.016 [2024-07-25 11:58:46.834432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:50.016 [2024-07-25 11:58:46.834442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834452] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:50.016 [2024-07-25 11:58:46.834463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:50.016 [2024-07-25 11:58:46.834473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.016 [2024-07-25 11:58:46.834500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:50.016 [2024-07-25 11:58:46.834510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:50.016 [2024-07-25 11:58:46.834520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:50.016 
[2024-07-25 11:58:46.834531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:50.016 [2024-07-25 11:58:46.834542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:50.016 [2024-07-25 11:58:46.834552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:50.016 [2024-07-25 11:58:46.834564] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:50.016 [2024-07-25 11:58:46.834577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.016 [2024-07-25 11:58:46.834603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:50.017 [2024-07-25 11:58:46.834615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:50.017 [2024-07-25 11:58:46.834626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:50.017 [2024-07-25 11:58:46.834638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:50.017 [2024-07-25 11:58:46.834649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:50.017 [2024-07-25 11:58:46.834660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:50.017 [2024-07-25 11:58:46.834671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:50.017 [2024-07-25 11:58:46.834682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:50.017 [2024-07-25 11:58:46.834707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:50.017 [2024-07-25 11:58:46.834720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:50.017 [2024-07-25 11:58:46.834731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:50.017 [2024-07-25 11:58:46.834743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:50.017 [2024-07-25 11:58:46.834754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:50.017 [2024-07-25 11:58:46.834781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:50.017 [2024-07-25 11:58:46.834794] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:50.017 [2024-07-25 11:58:46.834806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.017 [2024-07-25 11:58:46.834819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:50.017 [2024-07-25 11:58:46.834830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:50.017 [2024-07-25 11:58:46.834842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:50.017 [2024-07-25 11:58:46.834853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:50.017 [2024-07-25 11:58:46.834865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.834876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:50.017 [2024-07-25 11:58:46.834888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:29:50.017 [2024-07-25 11:58:46.834898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.880743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.880801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:50.017 [2024-07-25 11:58:46.880843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.761 ms 00:29:50.017 [2024-07-25 11:58:46.880855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.881066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.881094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:50.017 [2024-07-25 11:58:46.881115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:50.017 [2024-07-25 11:58:46.881127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.916996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.917049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:50.017 [2024-07-25 11:58:46.917083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.834 ms 00:29:50.017 [2024-07-25 11:58:46.917108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.917250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.917269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:50.017 [2024-07-25 11:58:46.917281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:50.017 [2024-07-25 11:58:46.917292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.917623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.917641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:50.017 [2024-07-25 11:58:46.917652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:29:50.017 [2024-07-25 11:58:46.917662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.917854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.917875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:50.017 [2024-07-25 11:58:46.917903] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:29:50.017 [2024-07-25 11:58:46.917914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.933363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.933402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:50.017 [2024-07-25 11:58:46.933434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.419 ms 00:29:50.017 [2024-07-25 11:58:46.933444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.948679] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:50.017 [2024-07-25 11:58:46.948760] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:50.017 [2024-07-25 11:58:46.948795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.948807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:50.017 [2024-07-25 11:58:46.948820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.213 ms 00:29:50.017 [2024-07-25 11:58:46.948830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.978483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.978528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:50.017 [2024-07-25 11:58:46.978547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.555 ms 00:29:50.017 [2024-07-25 11:58:46.978558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:46.994737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:46.994780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:50.017 [2024-07-25 11:58:46.994797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.075 ms 00:29:50.017 [2024-07-25 11:58:46.994809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:47.010898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:47.010941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:50.017 [2024-07-25 11:58:47.010958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.995 ms 00:29:50.017 [2024-07-25 11:58:47.010968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.017 [2024-07-25 11:58:47.011814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.017 [2024-07-25 11:58:47.011844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:50.017 [2024-07-25 11:58:47.011858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:29:50.017 [2024-07-25 11:58:47.011869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.085960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.086029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:50.276 [2024-07-25 11:58:47.086049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.055 ms 00:29:50.276 [2024-07-25 11:58:47.086061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.098812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:50.276 [2024-07-25 11:58:47.112926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.112992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:50.276 [2024-07-25 11:58:47.113012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.682 ms 00:29:50.276 [2024-07-25 11:58:47.113039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.113193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.113229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:50.276 [2024-07-25 11:58:47.113242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:50.276 [2024-07-25 11:58:47.113253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.113318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.113335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:50.276 [2024-07-25 11:58:47.113346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:50.276 [2024-07-25 11:58:47.113358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.113390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.113410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:50.276 [2024-07-25 11:58:47.113422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:50.276 [2024-07-25 11:58:47.113432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.113470] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:50.276 [2024-07-25 11:58:47.113486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.113497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:50.276 [2024-07-25 11:58:47.113509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:50.276 [2024-07-25 11:58:47.113519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.145102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.145171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:50.276 [2024-07-25 11:58:47.145206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.552 ms 00:29:50.276 [2024-07-25 11:58:47.145218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.276 [2024-07-25 11:58:47.145350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.276 [2024-07-25 11:58:47.145371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:50.276 [2024-07-25 11:58:47.145385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:50.276 [2024-07-25 11:58:47.145397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:50.276 [2024-07-25 11:58:47.146371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:50.276 [2024-07-25 11:58:47.150554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.345 ms, result 0 00:29:50.276 [2024-07-25 11:58:47.151382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:50.276 [2024-07-25 11:58:47.167853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:00.739  Copying: 256/256 [MB] (average 24 MBps) [2024-07-25 11:58:57.539235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:00.739 [2024-07-25 11:58:57.558814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.558861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:00.739 [2024-07-25 11:58:57.558881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:00.739 [2024-07-25 11:58:57.558898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.739 [2024-07-25 11:58:57.558959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:00.739 [2024-07-25 11:58:57.562621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.562685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:00.739 [2024-07-25 11:58:57.562721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:30:00.739 [2024-07-25 11:58:57.562733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.739 [2024-07-25 11:58:57.563024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.563048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:00.739 [2024-07-25 11:58:57.563062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:30:00.739 [2024-07-25 11:58:57.563073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.739 [2024-07-25 11:58:57.567049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.567087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:00.739 [2024-07-25 11:58:57.567127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:30:00.739 [2024-07-25 11:58:57.567138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.739 [2024-07-25 11:58:57.575093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.575134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:00.739 [2024-07-25 11:58:57.575167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.925 ms 00:30:00.739 [2024-07-25 11:58:57.575179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.739 [2024-07-25
11:58:57.608046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.608106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:00.739 [2024-07-25 11:58:57.608140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.792 ms 00:30:00.739 [2024-07-25 11:58:57.608151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.739 [2024-07-25 11:58:57.625839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.739 [2024-07-25 11:58:57.625885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:00.740 [2024-07-25 11:58:57.625903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.618 ms 00:30:00.740 [2024-07-25 11:58:57.625923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.740 [2024-07-25 11:58:57.626100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.740 [2024-07-25 11:58:57.626122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:00.740 [2024-07-25 11:58:57.626135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:00.740 [2024-07-25 11:58:57.626146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.740 [2024-07-25 11:58:57.657764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.740 [2024-07-25 11:58:57.657808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:00.740 [2024-07-25 11:58:57.657825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.595 ms 00:30:00.740 [2024-07-25 11:58:57.657837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.740 [2024-07-25 11:58:57.697915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.740 [2024-07-25 11:58:57.697987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:00.740 [2024-07-25 11:58:57.698046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.004 ms 00:30:00.740 [2024-07-25 11:58:57.698066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.740 [2024-07-25 11:58:57.739635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.740 [2024-07-25 11:58:57.739682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:00.740 [2024-07-25 11:58:57.739717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.471 ms 00:30:00.740 [2024-07-25 11:58:57.739729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.740 [2024-07-25 11:58:57.770644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.740 [2024-07-25 11:58:57.770704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:00.740 [2024-07-25 11:58:57.770724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.806 ms 00:30:00.740 [2024-07-25 11:58:57.770736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.740 [2024-07-25 11:58:57.770837] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:00.740 [2024-07-25 11:58:57.770867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:30:00.740 [2024-07-25 11:58:57.770893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.770998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:30:00.740 [2024-07-25 11:58:57.771187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:00.740 [2024-07-25 11:58:57.771633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771763] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.771990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.772001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.772013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.772024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:00.741 [2024-07-25 11:58:57.772045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:00.741 [2024-07-25 11:58:57.772056] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: d517d354-a105-4bd1-9c44-04083dc2667e 00:30:00.741 [2024-07-25 11:58:57.772068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:00.741 [2024-07-25 11:58:57.772078] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:00.741 [2024-07-25 11:58:57.772101] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:00.741 [2024-07-25 11:58:57.772112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:00.741 [2024-07-25 11:58:57.772122] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:00.741 [2024-07-25 11:58:57.772133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:00.741 [2024-07-25 11:58:57.772144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:00.741 [2024-07-25 11:58:57.772153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:00.741 [2024-07-25 11:58:57.772163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:00.741 [2024-07-25 11:58:57.772174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.741 [2024-07-25 11:58:57.772185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:00.741 [2024-07-25 11:58:57.772202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:30:00.741 [2024-07-25 11:58:57.772213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.788655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.000 [2024-07-25 11:58:57.788706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:01.000 [2024-07-25 11:58:57.788723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:30:01.000 [2024-07-25 11:58:57.788735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.789180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.000 [2024-07-25 11:58:57.789217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:01.000 [2024-07-25 11:58:57.789232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:30:01.000 [2024-07-25 11:58:57.789243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.829010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:57.829062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:01.000 [2024-07-25 11:58:57.829079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:57.829090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.829191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:57.829211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:01.000 [2024-07-25 11:58:57.829223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:57.829233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.829295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:57.829313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:30:01.000 [2024-07-25 11:58:57.829325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:57.829335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.829359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:57.829372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:01.000 [2024-07-25 11:58:57.829390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:57.829401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:57.928277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:57.928347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:01.000 [2024-07-25 11:58:57.928365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:57.928377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.013591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.013686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:01.000 [2024-07-25 11:58:58.013727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.013750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.013841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.013858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:01.000 [2024-07-25 11:58:58.013870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.013881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.013915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.013929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:01.000 [2024-07-25 11:58:58.013940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.013956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.014078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.014096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:01.000 [2024-07-25 11:58:58.014109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.014120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.014169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.014186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:01.000 [2024-07-25 11:58:58.014197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.014208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.014274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.014297] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:01.000 [2024-07-25 11:58:58.014310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.014321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.014377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.000 [2024-07-25 11:58:58.014394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:01.000 [2024-07-25 11:58:58.014405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.000 [2024-07-25 11:58:58.014422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.000 [2024-07-25 11:58:58.014599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.802 ms, result 0 00:30:02.375 00:30:02.375 00:30:02.375 11:58:59 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:02.634 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:30:02.634 11:58:59 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:30:02.634 11:58:59 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:30:02.634 11:58:59 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:02.634 11:58:59 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:02.634 11:58:59 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:30:02.893 11:58:59 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:30:02.893 Process with pid 79918 is not found 00:30:02.893 11:58:59 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79918 00:30:02.893 11:58:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79918 ']' 00:30:02.893 11:58:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79918 00:30:02.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79918) - No such process 00:30:02.893 11:58:59 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 79918 is not found' 00:30:02.893 00:30:02.893 real 1m9.009s 00:30:02.893 user 1m32.943s 00:30:02.893 sys 0m6.720s 00:30:02.893 ************************************ 00:30:02.893 END TEST ftl_trim 00:30:02.893 ************************************ 00:30:02.893 11:58:59 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:02.893 11:58:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:30:02.893 11:58:59 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:02.893 11:58:59 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:02.893 11:58:59 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:02.893 11:58:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:02.893 ************************************ 00:30:02.893 START TEST ftl_restore 00:30:02.893 ************************************ 00:30:02.893 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:02.893 * Looking for test storage... 
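The md5sum -c line above is the pass/fail gate of the ftl_trim test that just finished: data read back from the FTL bdev after a clean shutdown must still match the checksum recorded when the random pattern was first written, which is what proves the writes and trims were persisted. A minimal sketch of that verify flow, with paths copied from the log and the device setup and fio write steps elided:

    # record a checksum of the pattern file written through the FTL bdev
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # ...shut the FTL device down, bring it back up, read the data back...
    # a matching read prints "<path>: OK", exactly as captured above
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5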
00:30:02.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:02.893 11:58:59 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.oywa7YnE5x 00:30:02.894 11:58:59 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80179 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80179 00:30:02.894 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80179 ']' 00:30:02.894 11:58:59 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:02.894 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.894 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:02.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.894 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.894 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:02.894 11:58:59 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:30:03.154 [2024-07-25 11:59:00.045419] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:03.154 [2024-07-25 11:59:00.045970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80179 ] 00:30:03.413 [2024-07-25 11:59:00.224182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.671 [2024-07-25 11:59:00.450144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.237 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.237 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:30:04.237 11:59:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:04.237 11:59:01 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:30:04.237 11:59:01 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:04.237 11:59:01 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:30:04.237 11:59:01 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:30:04.237 11:59:01 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:04.499 11:59:01 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:04.499 11:59:01 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:30:04.499 11:59:01 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:04.499 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:04.499 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:04.499 11:59:01 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:30:04.499 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:04.499 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:05.066 { 00:30:05.066 "name": "nvme0n1", 00:30:05.066 "aliases": [ 00:30:05.066 "e42e80db-6bc9-4455-a591-7a8fae23bcdf" 00:30:05.066 ], 00:30:05.066 "product_name": "NVMe disk", 00:30:05.066 "block_size": 4096, 00:30:05.066 "num_blocks": 1310720, 00:30:05.066 "uuid": "e42e80db-6bc9-4455-a591-7a8fae23bcdf", 00:30:05.066 "assigned_rate_limits": { 00:30:05.066 "rw_ios_per_sec": 0, 00:30:05.066 "rw_mbytes_per_sec": 0, 00:30:05.066 "r_mbytes_per_sec": 0, 00:30:05.066 "w_mbytes_per_sec": 0 00:30:05.066 }, 00:30:05.066 "claimed": true, 00:30:05.066 "claim_type": "read_many_write_one", 00:30:05.066 "zoned": false, 00:30:05.066 "supported_io_types": { 00:30:05.066 "read": true, 00:30:05.066 "write": true, 00:30:05.066 "unmap": true, 00:30:05.066 "flush": true, 00:30:05.066 "reset": true, 00:30:05.066 "nvme_admin": true, 00:30:05.066 "nvme_io": true, 00:30:05.066 "nvme_io_md": false, 00:30:05.066 "write_zeroes": true, 00:30:05.066 "zcopy": false, 00:30:05.066 "get_zone_info": false, 00:30:05.066 "zone_management": false, 00:30:05.066 "zone_append": false, 00:30:05.066 "compare": true, 00:30:05.066 "compare_and_write": false, 00:30:05.066 "abort": true, 00:30:05.066 "seek_hole": false, 00:30:05.066 "seek_data": false, 00:30:05.066 "copy": true, 00:30:05.066 "nvme_iov_md": false 00:30:05.066 }, 00:30:05.066 "driver_specific": { 00:30:05.066 "nvme": [ 00:30:05.066 { 00:30:05.066 "pci_address": "0000:00:11.0", 00:30:05.066 "trid": { 00:30:05.066 "trtype": "PCIe", 00:30:05.066 "traddr": "0000:00:11.0" 00:30:05.066 }, 00:30:05.066 "ctrlr_data": { 00:30:05.066 "cntlid": 0, 00:30:05.066 "vendor_id": "0x1b36", 00:30:05.066 "model_number": "QEMU NVMe Ctrl", 00:30:05.066 "serial_number": "12341", 00:30:05.066 "firmware_revision": "8.0.0", 00:30:05.066 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:05.066 "oacs": { 00:30:05.066 "security": 0, 00:30:05.066 "format": 1, 00:30:05.066 "firmware": 0, 00:30:05.066 "ns_manage": 1 00:30:05.066 }, 00:30:05.066 "multi_ctrlr": false, 00:30:05.066 "ana_reporting": false 00:30:05.066 }, 00:30:05.066 "vs": { 00:30:05.066 "nvme_version": "1.4" 00:30:05.066 }, 00:30:05.066 "ns_data": { 00:30:05.066 "id": 1, 00:30:05.066 "can_share": false 00:30:05.066 } 00:30:05.066 } 00:30:05.066 ], 00:30:05.066 "mp_policy": "active_passive" 00:30:05.066 } 00:30:05.066 } 00:30:05.066 ]' 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:05.066 11:59:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:30:05.066 11:59:01 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:30:05.067 11:59:01 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:05.067 11:59:01 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:30:05.067 11:59:01 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:05.067 11:59:01 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:05.326 11:59:02 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=acd62709-62db-4028-967a-beaa0c526cb7 00:30:05.326 11:59:02 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:30:05.326 11:59:02 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u acd62709-62db-4028-967a-beaa0c526cb7 00:30:05.584 11:59:02 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:05.843 11:59:02 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=00ec056a-38ef-4cc9-bbcb-bb05c2148b43 00:30:05.843 11:59:02 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 00ec056a-38ef-4cc9-bbcb-bb05c2148b43 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:30:06.102 11:59:03 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.102 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.102 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:06.102 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:30:06.102 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:06.102 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.361 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:06.361 { 00:30:06.361 "name": "9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84", 00:30:06.361 "aliases": [ 00:30:06.361 "lvs/nvme0n1p0" 00:30:06.361 ], 00:30:06.361 "product_name": "Logical Volume", 00:30:06.361 "block_size": 4096, 00:30:06.361 "num_blocks": 26476544, 00:30:06.361 "uuid": "9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84", 00:30:06.361 "assigned_rate_limits": { 00:30:06.361 "rw_ios_per_sec": 0, 00:30:06.361 "rw_mbytes_per_sec": 0, 00:30:06.361 "r_mbytes_per_sec": 0, 00:30:06.361 "w_mbytes_per_sec": 0 00:30:06.361 }, 00:30:06.361 "claimed": false, 00:30:06.361 "zoned": false, 00:30:06.361 "supported_io_types": { 00:30:06.361 "read": true, 00:30:06.361 "write": true, 00:30:06.361 "unmap": true, 00:30:06.361 "flush": false, 00:30:06.361 "reset": true, 00:30:06.361 "nvme_admin": false, 00:30:06.361 "nvme_io": false, 00:30:06.361 "nvme_io_md": false, 00:30:06.361 "write_zeroes": true, 00:30:06.361 "zcopy": false, 00:30:06.361 "get_zone_info": false, 00:30:06.361 "zone_management": false, 00:30:06.361 "zone_append": false, 00:30:06.361 "compare": false, 00:30:06.361 "compare_and_write": false, 00:30:06.361 "abort": 
false, 00:30:06.361 "seek_hole": true, 00:30:06.361 "seek_data": true, 00:30:06.361 "copy": false, 00:30:06.361 "nvme_iov_md": false 00:30:06.361 }, 00:30:06.361 "driver_specific": { 00:30:06.361 "lvol": { 00:30:06.361 "lvol_store_uuid": "00ec056a-38ef-4cc9-bbcb-bb05c2148b43", 00:30:06.361 "base_bdev": "nvme0n1", 00:30:06.361 "thin_provision": true, 00:30:06.361 "num_allocated_clusters": 0, 00:30:06.361 "snapshot": false, 00:30:06.361 "clone": false, 00:30:06.361 "esnap_clone": false 00:30:06.361 } 00:30:06.361 } 00:30:06.361 } 00:30:06.361 ]' 00:30:06.361 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:06.619 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:06.619 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:06.619 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:06.619 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:06.619 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:30:06.619 11:59:03 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:30:06.619 11:59:03 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:30:06.619 11:59:03 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:06.878 11:59:03 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:06.878 11:59:03 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:06.878 11:59:03 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.878 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:06.878 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:06.878 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:30:06.878 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:06.878 11:59:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:07.136 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:07.136 { 00:30:07.136 "name": "9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84", 00:30:07.136 "aliases": [ 00:30:07.136 "lvs/nvme0n1p0" 00:30:07.136 ], 00:30:07.136 "product_name": "Logical Volume", 00:30:07.136 "block_size": 4096, 00:30:07.136 "num_blocks": 26476544, 00:30:07.136 "uuid": "9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84", 00:30:07.136 "assigned_rate_limits": { 00:30:07.136 "rw_ios_per_sec": 0, 00:30:07.136 "rw_mbytes_per_sec": 0, 00:30:07.136 "r_mbytes_per_sec": 0, 00:30:07.136 "w_mbytes_per_sec": 0 00:30:07.136 }, 00:30:07.136 "claimed": false, 00:30:07.136 "zoned": false, 00:30:07.136 "supported_io_types": { 00:30:07.136 "read": true, 00:30:07.136 "write": true, 00:30:07.136 "unmap": true, 00:30:07.136 "flush": false, 00:30:07.136 "reset": true, 00:30:07.136 "nvme_admin": false, 00:30:07.136 "nvme_io": false, 00:30:07.136 "nvme_io_md": false, 00:30:07.136 "write_zeroes": true, 00:30:07.136 "zcopy": false, 00:30:07.136 "get_zone_info": false, 00:30:07.136 "zone_management": false, 00:30:07.136 "zone_append": false, 00:30:07.136 "compare": false, 00:30:07.136 "compare_and_write": false, 00:30:07.136 "abort": false, 00:30:07.136 "seek_hole": true, 00:30:07.136 "seek_data": 
true, 00:30:07.136 "copy": false, 00:30:07.136 "nvme_iov_md": false 00:30:07.136 }, 00:30:07.136 "driver_specific": { 00:30:07.136 "lvol": { 00:30:07.136 "lvol_store_uuid": "00ec056a-38ef-4cc9-bbcb-bb05c2148b43", 00:30:07.136 "base_bdev": "nvme0n1", 00:30:07.136 "thin_provision": true, 00:30:07.136 "num_allocated_clusters": 0, 00:30:07.136 "snapshot": false, 00:30:07.136 "clone": false, 00:30:07.136 "esnap_clone": false 00:30:07.136 } 00:30:07.136 } 00:30:07.136 } 00:30:07.136 ]' 00:30:07.136 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:07.137 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:07.137 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:07.395 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:07.395 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:07.395 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:30:07.395 11:59:04 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:30:07.395 11:59:04 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:07.653 11:59:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:07.653 11:59:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:07.653 { 00:30:07.653 "name": "9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84", 00:30:07.653 "aliases": [ 00:30:07.653 "lvs/nvme0n1p0" 00:30:07.653 ], 00:30:07.653 "product_name": "Logical Volume", 00:30:07.653 "block_size": 4096, 00:30:07.653 "num_blocks": 26476544, 00:30:07.653 "uuid": "9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84", 00:30:07.653 "assigned_rate_limits": { 00:30:07.653 "rw_ios_per_sec": 0, 00:30:07.653 "rw_mbytes_per_sec": 0, 00:30:07.653 "r_mbytes_per_sec": 0, 00:30:07.653 "w_mbytes_per_sec": 0 00:30:07.653 }, 00:30:07.653 "claimed": false, 00:30:07.653 "zoned": false, 00:30:07.653 "supported_io_types": { 00:30:07.653 "read": true, 00:30:07.653 "write": true, 00:30:07.653 "unmap": true, 00:30:07.653 "flush": false, 00:30:07.653 "reset": true, 00:30:07.653 "nvme_admin": false, 00:30:07.653 "nvme_io": false, 00:30:07.653 "nvme_io_md": false, 00:30:07.653 "write_zeroes": true, 00:30:07.653 "zcopy": false, 00:30:07.653 "get_zone_info": false, 00:30:07.653 "zone_management": false, 00:30:07.653 "zone_append": false, 00:30:07.653 "compare": false, 00:30:07.653 "compare_and_write": false, 00:30:07.653 "abort": false, 00:30:07.653 "seek_hole": true, 00:30:07.653 "seek_data": true, 00:30:07.653 "copy": false, 00:30:07.653 "nvme_iov_md": false 00:30:07.653 }, 00:30:07.653 "driver_specific": { 00:30:07.653 "lvol": { 00:30:07.653 "lvol_store_uuid": "00ec056a-38ef-4cc9-bbcb-bb05c2148b43", 00:30:07.653 "base_bdev": 
"nvme0n1", 00:30:07.653 "thin_provision": true, 00:30:07.653 "num_allocated_clusters": 0, 00:30:07.653 "snapshot": false, 00:30:07.653 "clone": false, 00:30:07.653 "esnap_clone": false 00:30:07.653 } 00:30:07.653 } 00:30:07.653 } 00:30:07.653 ]' 00:30:07.653 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:07.912 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:07.912 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:07.912 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:07.913 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:07.913 11:59:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 --l2p_dram_limit 10' 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:30:07.913 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:30:07.913 11:59:04 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9b1d5cc1-08f8-4410-95e1-eac5bc9d3b84 --l2p_dram_limit 10 -c nvc0n1p0 00:30:08.199 [2024-07-25 11:59:04.974977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.199 [2024-07-25 11:59:04.975036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.199 [2024-07-25 11:59:04.975077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:08.199 [2024-07-25 11:59:04.975107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.199 [2024-07-25 11:59:04.975183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.199 [2024-07-25 11:59:04.975204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.199 [2024-07-25 11:59:04.975218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:08.199 [2024-07-25 11:59:04.975231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.199 [2024-07-25 11:59:04.975260] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.199 [2024-07-25 11:59:04.976245] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.199 [2024-07-25 11:59:04.976303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.199 [2024-07-25 11:59:04.976343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.199 [2024-07-25 11:59:04.976357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:30:08.199 [2024-07-25 11:59:04.976371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.199 [2024-07-25 11:59:04.976513] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a343668e-77a8-43d1-8f09-006b66036355 00:30:08.199 [2024-07-25 
11:59:04.977554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.199 [2024-07-25 11:59:04.977596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:08.200 [2024-07-25 11:59:04.977634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:08.200 [2024-07-25 11:59:04.977647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.982217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.982262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.200 [2024-07-25 11:59:04.982299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:30:08.200 [2024-07-25 11:59:04.982311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.982427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.982447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.200 [2024-07-25 11:59:04.982462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:30:08.200 [2024-07-25 11:59:04.982474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.982564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.982593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.200 [2024-07-25 11:59:04.982630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:08.200 [2024-07-25 11:59:04.982642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.982679] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.200 [2024-07-25 11:59:04.987317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.987362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.200 [2024-07-25 11:59:04.987396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:30:08.200 [2024-07-25 11:59:04.987409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.987456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.987475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.200 [2024-07-25 11:59:04.987488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:08.200 [2024-07-25 11:59:04.987501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.987563] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:08.200 [2024-07-25 11:59:04.987753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.200 [2024-07-25 11:59:04.987776] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.200 [2024-07-25 11:59:04.987797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:08.200 [2024-07-25 11:59:04.987826] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:30:08.200 [2024-07-25 11:59:04.987843] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.200 [2024-07-25 11:59:04.987856] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.200 [2024-07-25 11:59:04.987875] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.200 [2024-07-25 11:59:04.987887] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.200 [2024-07-25 11:59:04.987900] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.200 [2024-07-25 11:59:04.987912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.987925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.200 [2024-07-25 11:59:04.987938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:30:08.200 [2024-07-25 11:59:04.987951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.988042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.200 [2024-07-25 11:59:04.988060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.200 [2024-07-25 11:59:04.988073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:08.200 [2024-07-25 11:59:04.988089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.200 [2024-07-25 11:59:04.988206] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.200 [2024-07-25 11:59:04.988227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.200 [2024-07-25 11:59:04.988251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.200 [2024-07-25 11:59:04.988290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.200 [2024-07-25 11:59:04.988323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.200 [2024-07-25 11:59:04.988348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.200 [2024-07-25 11:59:04.988361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.200 [2024-07-25 11:59:04.988371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.200 [2024-07-25 11:59:04.988383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.200 [2024-07-25 11:59:04.988393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.200 [2024-07-25 11:59:04.988406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.200 [2024-07-25 11:59:04.988431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
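One consistency check worth making on this layout dump: the l2p region is reported as 80.00 MiB, and 20971520 L2P entries at an address size of 4 bytes come to exactly 80 MiB; at the 4096-byte block size those entries in turn map 80 GiB of logical space. The test then deliberately caps the DRAM-resident part of that table with --l2p_dram_limit 10 (the script's l2p_dram_size_mb=10 above). A throwaway shell check of the arithmetic, numbers copied from the dump:

    # on-media L2P table size: entries * address size, in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80, matches "blocks: 80.00 MiB"
    # logical space those entries cover, in GiB (4096-byte blocks)
    echo $(( 20971520 * 4096 / 1024**3 ))     # -> 80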
00:30:08.200 [2024-07-25 11:59:04.988441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.200 [2024-07-25 11:59:04.988464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.200 [2024-07-25 11:59:04.988499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.200 [2024-07-25 11:59:04.988531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.200 [2024-07-25 11:59:04.988565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.200 [2024-07-25 11:59:04.988598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.200 [2024-07-25 11:59:04.988623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.200 [2024-07-25 11:59:04.988636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.200 [2024-07-25 11:59:04.988647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.200 [2024-07-25 11:59:04.988659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:08.200 [2024-07-25 11:59:04.988671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.200 [2024-07-25 11:59:04.988682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.200 [2024-07-25 11:59:04.988719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.200 [2024-07-25 11:59:04.988733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988745] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.200 [2024-07-25 11:59:04.988757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.200 [2024-07-25 11:59:04.988777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.200 [2024-07-25 11:59:04.988803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.200 [2024-07-25 11:59:04.988813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.200 [2024-07-25 11:59:04.988827] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.200 [2024-07-25 11:59:04.988838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.200 [2024-07-25 11:59:04.988850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.200 [2024-07-25 11:59:04.988860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.200 [2024-07-25 11:59:04.988876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.200 [2024-07-25 11:59:04.988893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.200 [2024-07-25 11:59:04.988907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.200 [2024-07-25 11:59:04.988919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.200 [2024-07-25 11:59:04.988932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.200 [2024-07-25 11:59:04.988943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.200 [2024-07-25 11:59:04.988957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.200 [2024-07-25 11:59:04.988969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.201 [2024-07-25 11:59:04.988982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.201 [2024-07-25 11:59:04.988993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.201 [2024-07-25 11:59:04.989006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.201 [2024-07-25 11:59:04.989018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.201 [2024-07-25 11:59:04.989033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.201 [2024-07-25 11:59:04.989045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.201 [2024-07-25 11:59:04.989058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.201 [2024-07-25 11:59:04.989070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.201 [2024-07-25 11:59:04.989083] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:08.201 [2024-07-25 11:59:04.989096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.201 [2024-07-25 11:59:04.989110] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.201 [2024-07-25 11:59:04.989121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.201 [2024-07-25 11:59:04.989134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.201 [2024-07-25 11:59:04.989146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.201 [2024-07-25 11:59:04.989160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.201 [2024-07-25 11:59:04.989172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.201 [2024-07-25 11:59:04.989186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:30:08.201 [2024-07-25 11:59:04.989197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.201 [2024-07-25 11:59:04.989251] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:08.201 [2024-07-25 11:59:04.989268] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:10.101 [2024-07-25 11:59:07.034107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.034381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:10.101 [2024-07-25 11:59:07.034532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2044.860 ms 00:30:10.101 [2024-07-25 11:59:07.034713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.068474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.068780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:10.101 [2024-07-25 11:59:07.068944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.411 ms 00:30:10.101 [2024-07-25 11:59:07.069002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.069295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.069452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:10.101 [2024-07-25 11:59:07.069635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:10.101 [2024-07-25 11:59:07.069786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.109137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.109400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:10.101 [2024-07-25 11:59:07.109545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.226 ms 00:30:10.101 [2024-07-25 11:59:07.109717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.109928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.109998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:10.101 [2024-07-25 11:59:07.110209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.005 ms 00:30:10.101 [2024-07-25 11:59:07.110268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.110783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.110934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:10.101 [2024-07-25 11:59:07.111065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:30:10.101 [2024-07-25 11:59:07.111089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.111246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.111269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:10.101 [2024-07-25 11:59:07.111284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:30:10.101 [2024-07-25 11:59:07.111296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.101 [2024-07-25 11:59:07.129422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.101 [2024-07-25 11:59:07.129481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:10.101 [2024-07-25 11:59:07.129521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.095 ms 00:30:10.101 [2024-07-25 11:59:07.129549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.143589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:10.360 [2024-07-25 11:59:07.146524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 11:59:07.146564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:10.360 [2024-07-25 11:59:07.146626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.852 ms 00:30:10.360 [2024-07-25 11:59:07.146641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.217156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 11:59:07.217414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:10.360 [2024-07-25 11:59:07.217449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.462 ms 00:30:10.360 [2024-07-25 11:59:07.217466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.217727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 11:59:07.217755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:10.360 [2024-07-25 11:59:07.217770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:30:10.360 [2024-07-25 11:59:07.217787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.249276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 11:59:07.249329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:10.360 [2024-07-25 11:59:07.249350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.393 ms 00:30:10.360 [2024-07-25 11:59:07.249369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.280651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 
11:59:07.280779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:10.360 [2024-07-25 11:59:07.280803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.226 ms 00:30:10.360 [2024-07-25 11:59:07.280819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.281603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 11:59:07.281644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:10.360 [2024-07-25 11:59:07.281665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:30:10.360 [2024-07-25 11:59:07.281679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.360 [2024-07-25 11:59:07.371733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.360 [2024-07-25 11:59:07.371838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:10.360 [2024-07-25 11:59:07.371862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.949 ms 00:30:10.360 [2024-07-25 11:59:07.371880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.619 [2024-07-25 11:59:07.405579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.619 [2024-07-25 11:59:07.405684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:10.619 [2024-07-25 11:59:07.405740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.599 ms 00:30:10.619 [2024-07-25 11:59:07.405777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.619 [2024-07-25 11:59:07.437557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.619 [2024-07-25 11:59:07.437661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:10.619 [2024-07-25 11:59:07.437683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.667 ms 00:30:10.619 [2024-07-25 11:59:07.437697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.619 [2024-07-25 11:59:07.469793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.619 [2024-07-25 11:59:07.469896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:10.620 [2024-07-25 11:59:07.469919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.940 ms 00:30:10.620 [2024-07-25 11:59:07.469935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.620 [2024-07-25 11:59:07.470026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.620 [2024-07-25 11:59:07.470049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:10.620 [2024-07-25 11:59:07.470064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:10.620 [2024-07-25 11:59:07.470080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.620 [2024-07-25 11:59:07.470251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.620 [2024-07-25 11:59:07.470279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:10.620 [2024-07-25 11:59:07.470293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:10.620 [2024-07-25 11:59:07.470306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.620 [2024-07-25 11:59:07.471578] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2496.059 ms, result 0 00:30:10.620 { 00:30:10.620 "name": "ftl0", 00:30:10.620 "uuid": "a343668e-77a8-43d1-8f09-006b66036355" 00:30:10.620 } 00:30:10.620 11:59:07 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:10.620 11:59:07 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:10.878 11:59:07 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:30:10.878 11:59:07 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:11.137 [2024-07-25 11:59:08.002951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.003240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:11.137 [2024-07-25 11:59:08.003453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.137 [2024-07-25 11:59:08.003480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 11:59:08.003535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:11.137 [2024-07-25 11:59:08.006950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.007009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:11.137 [2024-07-25 11:59:08.007027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.388 ms 00:30:11.137 [2024-07-25 11:59:08.007041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 11:59:08.007355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.007384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:11.137 [2024-07-25 11:59:08.007409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:30:11.137 [2024-07-25 11:59:08.007422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 11:59:08.010816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.010855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:11.137 [2024-07-25 11:59:08.010871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:30:11.137 [2024-07-25 11:59:08.010885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 11:59:08.017244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.017283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:11.137 [2024-07-25 11:59:08.017315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.331 ms 00:30:11.137 [2024-07-25 11:59:08.017328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 11:59:08.048517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.048582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:11.137 [2024-07-25 11:59:08.048634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.100 ms 00:30:11.137 [2024-07-25 11:59:08.048648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 
11:59:08.067858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.067945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:11.137 [2024-07-25 11:59:08.067965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.126 ms 00:30:11.137 [2024-07-25 11:59:08.067981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.137 [2024-07-25 11:59:08.068182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.137 [2024-07-25 11:59:08.068209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:11.137 [2024-07-25 11:59:08.068223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:30:11.137 [2024-07-25 11:59:08.068237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.138 [2024-07-25 11:59:08.098909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.138 [2024-07-25 11:59:08.098960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:11.138 [2024-07-25 11:59:08.098993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.644 ms 00:30:11.138 [2024-07-25 11:59:08.099013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.138 [2024-07-25 11:59:08.130118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.138 [2024-07-25 11:59:08.130187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:11.138 [2024-07-25 11:59:08.130206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.041 ms 00:30:11.138 [2024-07-25 11:59:08.130220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.138 [2024-07-25 11:59:08.161598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.138 [2024-07-25 11:59:08.161666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:11.138 [2024-07-25 11:59:08.161684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.327 ms 00:30:11.138 [2024-07-25 11:59:08.161697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.398 [2024-07-25 11:59:08.192045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.398 [2024-07-25 11:59:08.192109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:11.398 [2024-07-25 11:59:08.192127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.180 ms 00:30:11.398 [2024-07-25 11:59:08.192141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.398 [2024-07-25 11:59:08.192190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:11.398 [2024-07-25 11:59:08.192218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 
11:59:08.192287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:30:11.398 [2024-07-25 11:59:08.192611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:11.398 [2024-07-25 11:59:08.192838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.192992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:11.399 [2024-07-25 11:59:08.193635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:11.399 [2024-07-25 11:59:08.193648] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a343668e-77a8-43d1-8f09-006b66036355 00:30:11.399 [2024-07-25 11:59:08.193662] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:11.399 [2024-07-25 11:59:08.193674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:11.399 [2024-07-25 11:59:08.193701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:11.399 [2024-07-25 11:59:08.193716] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:11.399 [2024-07-25 11:59:08.193730] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:11.399 [2024-07-25 11:59:08.193754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:11.399 [2024-07-25 11:59:08.193768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:11.399 [2024-07-25 11:59:08.193779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:11.399 [2024-07-25 11:59:08.193792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:11.399 [2024-07-25 11:59:08.193804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.399 [2024-07-25 11:59:08.193826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:11.399 [2024-07-25 11:59:08.193839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:30:11.399 [2024-07-25 11:59:08.193855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.399 [2024-07-25 11:59:08.210542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.399 [2024-07-25 11:59:08.210611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:11.399 [2024-07-25 11:59:08.210631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.620 ms 00:30:11.399 [2024-07-25 11:59:08.210645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.399 [2024-07-25 11:59:08.211119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.399 [2024-07-25 11:59:08.211161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:11.399 [2024-07-25 11:59:08.211182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:30:11.399 [2024-07-25 11:59:08.211196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.399 [2024-07-25 11:59:08.264393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.399 [2024-07-25 11:59:08.264461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.399 [2024-07-25 11:59:08.264479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.399 [2024-07-25 11:59:08.264492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.399 [2024-07-25 11:59:08.264561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.399 [2024-07-25 11:59:08.264580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.399 [2024-07-25 11:59:08.264595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.399 [2024-07-25 11:59:08.264607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.399 [2024-07-25 11:59:08.264766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.400 [2024-07-25 11:59:08.264808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:11.400 [2024-07-25 11:59:08.264822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.400 [2024-07-25 11:59:08.264852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.400 [2024-07-25 11:59:08.264881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.400 [2024-07-25 11:59:08.264901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:30:11.400 [2024-07-25 11:59:08.264915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.400 [2024-07-25 11:59:08.264932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.400 [2024-07-25 11:59:08.359207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.400 [2024-07-25 11:59:08.359291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.400 [2024-07-25 11:59:08.359309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.400 [2024-07-25 11:59:08.359323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.441446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.441524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.659 [2024-07-25 11:59:08.441546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.441560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.441664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.441688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.659 [2024-07-25 11:59:08.441701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.441759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.441865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.441900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.659 [2024-07-25 11:59:08.441914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.441927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.442072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.442096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.659 [2024-07-25 11:59:08.442126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.442140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.442199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.442223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:11.659 [2024-07-25 11:59:08.442237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.442251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.442303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.442330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.659 [2024-07-25 11:59:08.442344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.442358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.442415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.659 [2024-07-25 11:59:08.442438] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.659 [2024-07-25 11:59:08.442451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.659 [2024-07-25 11:59:08.442465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.659 [2024-07-25 11:59:08.442633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.637 ms, result 0 00:30:11.659 true 00:30:11.659 11:59:08 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80179 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80179 ']' 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80179 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80179 00:30:11.659 killing process with pid 80179 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80179' 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80179 00:30:11.659 11:59:08 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80179 00:30:16.927 11:59:13 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:21.118 262144+0 records in 00:30:21.118 262144+0 records out 00:30:21.118 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.72175 s, 227 MB/s 00:30:21.118 11:59:18 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:23.647 11:59:20 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:23.647 [2024-07-25 11:59:20.297592] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
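Two figures in the entries above can be cross-checked from the logged values alone: the dd pre-fill wrote 256K records of 4 KiB (1073741824 bytes) in 4.72175 s, which matches the reported 227 MB/s once you note that dd reports decimal megabytes, and the layout dump's 80.00 MiB l2p region is exactly the 20971520 L2P entries times the 4-byte L2P address size that ftl_layout_setup printed. A minimal Python sketch of that arithmetic (the file name is illustrative; this is not part of the test suite):

# check_log_math.py - sanity-check two values reported in the log above
l2p_entries = 20971520                 # "L2P entries" from ftl_layout_setup
l2p_addr_size = 4                      # "L2P address size" in bytes
l2p_mib = l2p_entries * l2p_addr_size / (1024 * 1024)
assert l2p_mib == 80.0                 # matches "Region l2p ... blocks: 80.00 MiB"

records = 256 * 1024                   # dd count=256K
block_size = 4096                      # dd bs=4K
total_bytes = records * block_size
assert total_bytes == 1073741824       # "1073741824 bytes (1.1 GB, 1.0 GiB)"
rate_mb_s = total_bytes / 4.72175 / 1e6  # dd reports decimal MB/s
print(f"{rate_mb_s:.0f} MB/s")           # prints 227, matching the dd summary

The ftl.json consumed by the spdk_dd invocation launched here is the '{"subsystems": [...]}' envelope that restore.sh assembled a few entries earlier by wrapping the output of 'rpc.py save_subsystem_config -n bdev' between the two echo statements.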
00:30:23.647 [2024-07-25 11:59:20.297809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80415 ] 00:30:23.647 [2024-07-25 11:59:20.460201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.647 [2024-07-25 11:59:20.644480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.256 [2024-07-25 11:59:20.951531] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:24.256 [2024-07-25 11:59:20.951612] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:24.256 [2024-07-25 11:59:21.110944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.111006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:24.256 [2024-07-25 11:59:21.111028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:24.256 [2024-07-25 11:59:21.111040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.111104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.111123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:24.256 [2024-07-25 11:59:21.111136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:24.256 [2024-07-25 11:59:21.111151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.111186] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:24.256 [2024-07-25 11:59:21.112143] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:24.256 [2024-07-25 11:59:21.112190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.112205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:24.256 [2024-07-25 11:59:21.112217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:30:24.256 [2024-07-25 11:59:21.112227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.113334] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:24.256 [2024-07-25 11:59:21.129554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.129597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:24.256 [2024-07-25 11:59:21.129615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.221 ms 00:30:24.256 [2024-07-25 11:59:21.129626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.129718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.129742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:24.256 [2024-07-25 11:59:21.129755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:24.256 [2024-07-25 11:59:21.129766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.134227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:24.256 [2024-07-25 11:59:21.134274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:24.256 [2024-07-25 11:59:21.134289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:30:24.256 [2024-07-25 11:59:21.134301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.134401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.134421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:24.256 [2024-07-25 11:59:21.134434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:24.256 [2024-07-25 11:59:21.134445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.134510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.134528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:24.256 [2024-07-25 11:59:21.134540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:24.256 [2024-07-25 11:59:21.134551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.134596] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:24.256 [2024-07-25 11:59:21.138831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.138868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:24.256 [2024-07-25 11:59:21.138884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.245 ms 00:30:24.256 [2024-07-25 11:59:21.138899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.138940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.138956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:24.256 [2024-07-25 11:59:21.138968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:24.256 [2024-07-25 11:59:21.138979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.139025] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:24.256 [2024-07-25 11:59:21.139056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:24.256 [2024-07-25 11:59:21.139100] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:24.256 [2024-07-25 11:59:21.139130] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:24.256 [2024-07-25 11:59:21.139237] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:24.256 [2024-07-25 11:59:21.139253] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:24.256 [2024-07-25 11:59:21.139268] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:24.256 [2024-07-25 11:59:21.139282] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:24.256 [2024-07-25 11:59:21.139295] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:24.256 [2024-07-25 11:59:21.139307] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:24.256 [2024-07-25 11:59:21.139317] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:24.256 [2024-07-25 11:59:21.139328] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:24.256 [2024-07-25 11:59:21.139339] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:24.256 [2024-07-25 11:59:21.139354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.139365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:24.256 [2024-07-25 11:59:21.139377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:30:24.256 [2024-07-25 11:59:21.139387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.139477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.256 [2024-07-25 11:59:21.139492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:24.256 [2024-07-25 11:59:21.139504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:24.256 [2024-07-25 11:59:21.139514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.256 [2024-07-25 11:59:21.139620] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:24.256 [2024-07-25 11:59:21.139642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:24.256 [2024-07-25 11:59:21.139655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:24.256 [2024-07-25 11:59:21.139666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.256 [2024-07-25 11:59:21.139677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:24.256 [2024-07-25 11:59:21.139687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:24.256 [2024-07-25 11:59:21.139720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:24.256 [2024-07-25 11:59:21.139733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:24.256 [2024-07-25 11:59:21.139744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:24.256 [2024-07-25 11:59:21.139754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:24.256 [2024-07-25 11:59:21.139765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:24.256 [2024-07-25 11:59:21.139775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:24.256 [2024-07-25 11:59:21.139785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:24.256 [2024-07-25 11:59:21.139796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:24.256 [2024-07-25 11:59:21.139806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:24.257 [2024-07-25 11:59:21.139816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.257 [2024-07-25 11:59:21.139826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:24.257 [2024-07-25 11:59:21.139836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:24.257 [2024-07-25 11:59:21.139846] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.257 [2024-07-25 11:59:21.139856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:24.257 [2024-07-25 11:59:21.139879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:24.257 [2024-07-25 11:59:21.139889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.257 [2024-07-25 11:59:21.139899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:24.257 [2024-07-25 11:59:21.139909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:24.257 [2024-07-25 11:59:21.139918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.257 [2024-07-25 11:59:21.139928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:24.257 [2024-07-25 11:59:21.139938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:24.257 [2024-07-25 11:59:21.139947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.257 [2024-07-25 11:59:21.139957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:24.257 [2024-07-25 11:59:21.139967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:24.257 [2024-07-25 11:59:21.139976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:24.257 [2024-07-25 11:59:21.139986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:24.257 [2024-07-25 11:59:21.139996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:24.257 [2024-07-25 11:59:21.140006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:24.257 [2024-07-25 11:59:21.140015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:24.257 [2024-07-25 11:59:21.140025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:24.257 [2024-07-25 11:59:21.140035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:24.257 [2024-07-25 11:59:21.140045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:24.257 [2024-07-25 11:59:21.140055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:24.257 [2024-07-25 11:59:21.140064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.257 [2024-07-25 11:59:21.140074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:24.257 [2024-07-25 11:59:21.140084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:24.257 [2024-07-25 11:59:21.140094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.257 [2024-07-25 11:59:21.140103] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:24.257 [2024-07-25 11:59:21.140116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:24.257 [2024-07-25 11:59:21.140127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:24.257 [2024-07-25 11:59:21.140139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:24.257 [2024-07-25 11:59:21.140150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:24.257 [2024-07-25 11:59:21.140160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:24.257 [2024-07-25 11:59:21.140170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:24.257 
[2024-07-25 11:59:21.140181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:24.257 [2024-07-25 11:59:21.140191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:24.257 [2024-07-25 11:59:21.140201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:24.257 [2024-07-25 11:59:21.140212] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:24.257 [2024-07-25 11:59:21.140225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:24.257 [2024-07-25 11:59:21.140248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:24.257 [2024-07-25 11:59:21.140258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:24.257 [2024-07-25 11:59:21.140269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:24.257 [2024-07-25 11:59:21.140280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:24.257 [2024-07-25 11:59:21.140291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:24.257 [2024-07-25 11:59:21.140302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:24.257 [2024-07-25 11:59:21.140312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:24.257 [2024-07-25 11:59:21.140323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:24.257 [2024-07-25 11:59:21.140334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:24.257 [2024-07-25 11:59:21.140389] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:24.257 [2024-07-25 11:59:21.140405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:24.257 [2024-07-25 11:59:21.140428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:24.257 [2024-07-25 11:59:21.140439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:24.257 [2024-07-25 11:59:21.140450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:24.257 [2024-07-25 11:59:21.140462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.140475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:24.257 [2024-07-25 11:59:21.140486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:30:24.257 [2024-07-25 11:59:21.140497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.180471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.180535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:24.257 [2024-07-25 11:59:21.180557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.886 ms 00:30:24.257 [2024-07-25 11:59:21.180569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.180709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.180729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:24.257 [2024-07-25 11:59:21.180754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:24.257 [2024-07-25 11:59:21.180767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.218895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.218956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:24.257 [2024-07-25 11:59:21.218975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.029 ms 00:30:24.257 [2024-07-25 11:59:21.218987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.219055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.219072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:24.257 [2024-07-25 11:59:21.219085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:24.257 [2024-07-25 11:59:21.219101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.219493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.219513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:24.257 [2024-07-25 11:59:21.219525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:30:24.257 [2024-07-25 11:59:21.219536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.219717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.219738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:24.257 [2024-07-25 11:59:21.219750] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:30:24.257 [2024-07-25 11:59:21.219777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.235684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.235742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:24.257 [2024-07-25 11:59:21.235765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:30:24.257 [2024-07-25 11:59:21.235776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.257 [2024-07-25 11:59:21.251975] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:24.257 [2024-07-25 11:59:21.252018] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:24.257 [2024-07-25 11:59:21.252036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.257 [2024-07-25 11:59:21.252048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:24.257 [2024-07-25 11:59:21.252061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.122 ms 00:30:24.257 [2024-07-25 11:59:21.252072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.258 [2024-07-25 11:59:21.283221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.258 [2024-07-25 11:59:21.283280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:24.258 [2024-07-25 11:59:21.283304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.100 ms 00:30:24.258 [2024-07-25 11:59:21.283317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.515 [2024-07-25 11:59:21.298963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.515 [2024-07-25 11:59:21.299006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:24.515 [2024-07-25 11:59:21.299023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.591 ms 00:30:24.515 [2024-07-25 11:59:21.299034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.515 [2024-07-25 11:59:21.314390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.515 [2024-07-25 11:59:21.314439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:24.515 [2024-07-25 11:59:21.314456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.313 ms 00:30:24.515 [2024-07-25 11:59:21.314467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.515 [2024-07-25 11:59:21.315288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.515 [2024-07-25 11:59:21.315329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:24.515 [2024-07-25 11:59:21.315344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:30:24.515 [2024-07-25 11:59:21.315355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.515 [2024-07-25 11:59:21.387011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.515 [2024-07-25 11:59:21.387086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:24.515 [2024-07-25 11:59:21.387106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.625 ms 00:30:24.515 [2024-07-25 11:59:21.387126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.515 [2024-07-25 11:59:21.399780] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:24.515 [2024-07-25 11:59:21.402442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.402478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:24.516 [2024-07-25 11:59:21.402495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.241 ms 00:30:24.516 [2024-07-25 11:59:21.402506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.516 [2024-07-25 11:59:21.402631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.402653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:24.516 [2024-07-25 11:59:21.402666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:24.516 [2024-07-25 11:59:21.402678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.516 [2024-07-25 11:59:21.402852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.402874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:24.516 [2024-07-25 11:59:21.402887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:24.516 [2024-07-25 11:59:21.402898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.516 [2024-07-25 11:59:21.402932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.402947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:24.516 [2024-07-25 11:59:21.402959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:24.516 [2024-07-25 11:59:21.402969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.516 [2024-07-25 11:59:21.403010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:24.516 [2024-07-25 11:59:21.403027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.403042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:24.516 [2024-07-25 11:59:21.403053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:24.516 [2024-07-25 11:59:21.403064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.516 [2024-07-25 11:59:21.433920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.433976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:24.516 [2024-07-25 11:59:21.433995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.831 ms 00:30:24.516 [2024-07-25 11:59:21.434014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:24.516 [2024-07-25 11:59:21.434104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:24.516 [2024-07-25 11:59:21.434123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:24.516 [2024-07-25 11:59:21.434136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:24.516 [2024-07-25 11:59:21.434147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
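The startup trace above is arithmetically self-consistent. As a quick cross-check — a minimal sketch, assuming the 4 KiB block size SPDK's FTL uses (FTL_BLOCK_SIZE); every other number is taken verbatim from the ftl_layout/ftl_sb_v5 dump above — the hex blk_sz values, the MiB region sizes, and the L2P parameters all describe the same layout:

```python
# Cross-check of the FTL layout figures logged during "FTL startup" above.
# Assumption: one FTL block = 4096 bytes (SPDK's FTL_BLOCK_SIZE).
FTL_BLOCK_SIZE = 4096
MiB = 1024 * 1024

def blocks_to_mib(nblocks: int) -> float:
    """Convert a block count to MiB at the assumed 4 KiB block size."""
    return nblocks * FTL_BLOCK_SIZE / MiB

# "Region l2p ... blocks: 80.00 MiB" vs. the sb v5 dump's "blk_sz:0x5000"
assert blocks_to_mib(0x5000) == 80.0
# ... and the same 80 MiB from "L2P entries: 20971520" x "L2P address size: 4"
assert 20971520 * 4 / MiB == 80.0

# "P2L checkpoint pages: 2048": each p2l0..p2l3 region is blk_sz:0x800 = 8.00 MiB
assert blocks_to_mib(0x800) == 8.0

# Base-device data region (type 0x9, blk_sz:0x1900000) vs.
# "Region data_btm ... blocks: 102400.00 MiB"
assert blocks_to_mib(0x1900000) == 102400.0

# The spdk_dd passes in this test move --count=262144 such blocks,
# i.e. exactly the 1024 MiB total shown by the "Copying:" progress.
assert blocks_to_mib(262144) == 1024.0

print("FTL layout figures are internally consistent")
```

The same conversion accounts for the band dump further down: the 102400 MiB data region splits into 100 bands of 262144 blocks each, of which 261120 show up as usable in "Bands validity" (the remaining 1024 blocks per band presumably hold per-band tail metadata). The Jenkins timestamps also bracket the first 1024 MiB copy at roughly 38 seconds, consistent with the reported 27 MBps average.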
00:30:24.516 [2024-07-25 11:59:21.435307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 323.875 ms, result 0 00:31:02.352  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 83/1024 [MB] (27 MBps) Copying: 111/1024 [MB] (27 MBps) Copying: 138/1024 [MB] (26 MBps) Copying: 163/1024 [MB] (25 MBps) Copying: 189/1024 [MB] (25 MBps) Copying: 215/1024 [MB] (26 MBps) Copying: 242/1024 [MB] (26 MBps) Copying: 270/1024 [MB] (27 MBps) Copying: 298/1024 [MB] (28 MBps) Copying: 325/1024 [MB] (27 MBps) Copying: 351/1024 [MB] (26 MBps) Copying: 378/1024 [MB] (26 MBps) Copying: 404/1024 [MB] (26 MBps) Copying: 431/1024 [MB] (26 MBps) Copying: 457/1024 [MB] (26 MBps) Copying: 484/1024 [MB] (26 MBps) Copying: 510/1024 [MB] (26 MBps) Copying: 537/1024 [MB] (27 MBps) Copying: 565/1024 [MB] (27 MBps) Copying: 592/1024 [MB] (27 MBps) Copying: 620/1024 [MB] (28 MBps) Copying: 647/1024 [MB] (27 MBps) Copying: 675/1024 [MB] (27 MBps) Copying: 703/1024 [MB] (27 MBps) Copying: 730/1024 [MB] (26 MBps) Copying: 757/1024 [MB] (27 MBps) Copying: 784/1024 [MB] (26 MBps) Copying: 811/1024 [MB] (26 MBps) Copying: 839/1024 [MB] (28 MBps) Copying: 867/1024 [MB] (27 MBps) Copying: 894/1024 [MB] (27 MBps) Copying: 922/1024 [MB] (28 MBps) Copying: 950/1024 [MB] (27 MBps) Copying: 978/1024 [MB] (27 MBps) Copying: 1006/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-25 11:59:59.118600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.118675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:02.352 [2024-07-25 11:59:59.118720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:02.352 [2024-07-25 11:59:59.118736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.118770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:02.352 [2024-07-25 11:59:59.122106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.122145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:02.352 [2024-07-25 11:59:59.122162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.300 ms 00:31:02.352 [2024-07-25 11:59:59.122181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.123748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.123792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:02.352 [2024-07-25 11:59:59.123809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:31:02.352 [2024-07-25 11:59:59.123821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.140027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.140125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:02.352 [2024-07-25 11:59:59.140146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.175 ms 00:31:02.352 [2024-07-25 11:59:59.140158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.146936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.146973] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:02.352 [2024-07-25 11:59:59.146989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.695 ms 00:31:02.352 [2024-07-25 11:59:59.147001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.178437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.178484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:02.352 [2024-07-25 11:59:59.178501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.323 ms 00:31:02.352 [2024-07-25 11:59:59.178512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.196375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.196441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:02.352 [2024-07-25 11:59:59.196460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.816 ms 00:31:02.352 [2024-07-25 11:59:59.196472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.196633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.196655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:02.352 [2024-07-25 11:59:59.196673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:31:02.352 [2024-07-25 11:59:59.196685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.228458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.228512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:02.352 [2024-07-25 11:59:59.228532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.720 ms 00:31:02.352 [2024-07-25 11:59:59.228543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.259762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.259812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:02.352 [2024-07-25 11:59:59.259829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.166 ms 00:31:02.352 [2024-07-25 11:59:59.259840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.290671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.290729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:02.352 [2024-07-25 11:59:59.290748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.783 ms 00:31:02.352 [2024-07-25 11:59:59.290774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.321905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.352 [2024-07-25 11:59:59.321981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:02.352 [2024-07-25 11:59:59.322000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.034 ms 00:31:02.352 [2024-07-25 11:59:59.322012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.352 [2024-07-25 11:59:59.322087] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:31:02.352 [2024-07-25 11:59:59.322113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:02.352 [2024-07-25 11:59:59.322408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322758] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.322989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323046] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 
11:59:59.323338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:02.353 [2024-07-25 11:59:59.323359] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:02.353 [2024-07-25 11:59:59.323370] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a343668e-77a8-43d1-8f09-006b66036355 00:31:02.353 [2024-07-25 11:59:59.323391] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:02.353 [2024-07-25 11:59:59.323402] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:02.353 [2024-07-25 11:59:59.323412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:02.354 [2024-07-25 11:59:59.323424] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:02.354 [2024-07-25 11:59:59.323434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:02.354 [2024-07-25 11:59:59.323445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:02.354 [2024-07-25 11:59:59.323456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:02.354 [2024-07-25 11:59:59.323466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:02.354 [2024-07-25 11:59:59.323476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:02.354 [2024-07-25 11:59:59.323487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.354 [2024-07-25 11:59:59.323498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:02.354 [2024-07-25 11:59:59.323514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:31:02.354 [2024-07-25 11:59:59.323525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.354 [2024-07-25 11:59:59.340420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.354 [2024-07-25 11:59:59.340479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:02.354 [2024-07-25 11:59:59.340498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.839 ms 00:31:02.354 [2024-07-25 11:59:59.340525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.354 [2024-07-25 11:59:59.341001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.354 [2024-07-25 11:59:59.341032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:02.354 [2024-07-25 11:59:59.341047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:31:02.354 [2024-07-25 11:59:59.341058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.354 [2024-07-25 11:59:59.377984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.354 [2024-07-25 11:59:59.378032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.354 [2024-07-25 11:59:59.378049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.354 [2024-07-25 11:59:59.378060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.354 [2024-07-25 11:59:59.378127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.354 [2024-07-25 11:59:59.378142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.354 [2024-07-25 11:59:59.378154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.354 
[2024-07-25 11:59:59.378165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.354 [2024-07-25 11:59:59.378285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.354 [2024-07-25 11:59:59.378306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.354 [2024-07-25 11:59:59.378319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.354 [2024-07-25 11:59:59.378330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.354 [2024-07-25 11:59:59.378352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.354 [2024-07-25 11:59:59.378366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.354 [2024-07-25 11:59:59.378377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.354 [2024-07-25 11:59:59.378388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.476830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.476902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.613 [2024-07-25 11:59:59.476920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.476932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.613 [2024-07-25 11:59:59.561177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:02.613 [2024-07-25 11:59:59.561309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:02.613 [2024-07-25 11:59:59.561430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:02.613 [2024-07-25 11:59:59.561602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:02.613 [2024-07-25 11:59:59.561732] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:02.613 [2024-07-25 11:59:59.561823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.561886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.613 [2024-07-25 11:59:59.561904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:02.613 [2024-07-25 11:59:59.561915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.613 [2024-07-25 11:59:59.561926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.613 [2024-07-25 11:59:59.562065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.453 ms, result 0 00:31:03.986 00:31:03.986 00:31:03.986 12:00:00 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:03.986 [2024-07-25 12:00:00.799309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:03.986 [2024-07-25 12:00:00.799484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80827 ] 00:31:03.986 [2024-07-25 12:00:00.965078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.273 [2024-07-25 12:00:01.150319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.529 [2024-07-25 12:00:01.459630] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:04.529 [2024-07-25 12:00:01.459736] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:04.786 [2024-07-25 12:00:01.620181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.786 [2024-07-25 12:00:01.620281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:04.786 [2024-07-25 12:00:01.620304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:04.787 [2024-07-25 12:00:01.620317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.620416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.620441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:04.787 [2024-07-25 12:00:01.620454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:04.787 [2024-07-25 12:00:01.620469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.620511] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:04.787 [2024-07-25 12:00:01.621479] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:04.787 [2024-07-25 12:00:01.621524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.621538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:04.787 [2024-07-25 12:00:01.621551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:31:04.787 [2024-07-25 12:00:01.621562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.622787] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:04.787 [2024-07-25 12:00:01.639154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.639200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:04.787 [2024-07-25 12:00:01.639218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.368 ms 00:31:04.787 [2024-07-25 12:00:01.639230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.639312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.639335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:04.787 [2024-07-25 12:00:01.639349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:04.787 [2024-07-25 12:00:01.639359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.643718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.643761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:04.787 [2024-07-25 12:00:01.643777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.263 ms 00:31:04.787 [2024-07-25 12:00:01.643789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.643889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.643909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:04.787 [2024-07-25 12:00:01.643922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:04.787 [2024-07-25 12:00:01.643932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.643999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.644017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:04.787 [2024-07-25 12:00:01.644030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:04.787 [2024-07-25 12:00:01.644040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.644075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:04.787 [2024-07-25 12:00:01.648317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.648357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:04.787 [2024-07-25 12:00:01.648374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:31:04.787 [2024-07-25 12:00:01.648385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 
12:00:01.648430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.648447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:04.787 [2024-07-25 12:00:01.648459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:04.787 [2024-07-25 12:00:01.648470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.648517] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:04.787 [2024-07-25 12:00:01.648548] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:04.787 [2024-07-25 12:00:01.648592] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:04.787 [2024-07-25 12:00:01.648614] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:04.787 [2024-07-25 12:00:01.648734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:04.787 [2024-07-25 12:00:01.648752] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:04.787 [2024-07-25 12:00:01.648766] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:04.787 [2024-07-25 12:00:01.648780] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:04.787 [2024-07-25 12:00:01.648794] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:04.787 [2024-07-25 12:00:01.648806] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:04.787 [2024-07-25 12:00:01.648817] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:04.787 [2024-07-25 12:00:01.648827] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:04.787 [2024-07-25 12:00:01.648838] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:04.787 [2024-07-25 12:00:01.648849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.648866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:04.787 [2024-07-25 12:00:01.648878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:31:04.787 [2024-07-25 12:00:01.648888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.648978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.648993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:04.787 [2024-07-25 12:00:01.649005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:04.787 [2024-07-25 12:00:01.649015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.649123] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:04.787 [2024-07-25 12:00:01.649146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:04.787 [2024-07-25 12:00:01.649165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649176] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:04.787 [2024-07-25 12:00:01.649198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:04.787 [2024-07-25 12:00:01.649229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:04.787 [2024-07-25 12:00:01.649249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:04.787 [2024-07-25 12:00:01.649259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:04.787 [2024-07-25 12:00:01.649269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:04.787 [2024-07-25 12:00:01.649279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:04.787 [2024-07-25 12:00:01.649290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:04.787 [2024-07-25 12:00:01.649300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:04.787 [2024-07-25 12:00:01.649324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:04.787 [2024-07-25 12:00:01.649367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:04.787 [2024-07-25 12:00:01.649398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:04.787 [2024-07-25 12:00:01.649430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:04.787 [2024-07-25 12:00:01.649460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:04.787 [2024-07-25 12:00:01.649490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:04.787 [2024-07-25 12:00:01.649510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:04.787 [2024-07-25 12:00:01.649520] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:04.787 [2024-07-25 12:00:01.649530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:04.787 [2024-07-25 12:00:01.649540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:04.787 [2024-07-25 12:00:01.649551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:04.787 [2024-07-25 12:00:01.649560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:04.787 [2024-07-25 12:00:01.649581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:04.787 [2024-07-25 12:00:01.649591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649601] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:04.787 [2024-07-25 12:00:01.649612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:04.787 [2024-07-25 12:00:01.649623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:04.787 [2024-07-25 12:00:01.649645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:04.787 [2024-07-25 12:00:01.649656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:04.787 [2024-07-25 12:00:01.649666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:04.787 [2024-07-25 12:00:01.649677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:04.787 [2024-07-25 12:00:01.649687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:04.787 [2024-07-25 12:00:01.649713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:04.787 [2024-07-25 12:00:01.649725] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:04.787 [2024-07-25 12:00:01.649739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:04.787 [2024-07-25 12:00:01.649763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:04.787 [2024-07-25 12:00:01.649774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:04.787 [2024-07-25 12:00:01.649785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:04.787 [2024-07-25 12:00:01.649796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:04.787 [2024-07-25 12:00:01.649807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:04.787 [2024-07-25 12:00:01.649818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:04.787 [2024-07-25 
12:00:01.649829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:04.787 [2024-07-25 12:00:01.649840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:04.787 [2024-07-25 12:00:01.649850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:04.787 [2024-07-25 12:00:01.649905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:04.787 [2024-07-25 12:00:01.649918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:04.787 [2024-07-25 12:00:01.649946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:04.787 [2024-07-25 12:00:01.649957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:04.787 [2024-07-25 12:00:01.649969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:04.787 [2024-07-25 12:00:01.649981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.649992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:04.787 [2024-07-25 12:00:01.650003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:31:04.787 [2024-07-25 12:00:01.650014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.700827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.700885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:04.787 [2024-07-25 12:00:01.700907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.723 ms 00:31:04.787 [2024-07-25 12:00:01.700919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.701042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.701060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:04.787 [2024-07-25 12:00:01.701072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:04.787 [2024-07-25 12:00:01.701083] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.739615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.739679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:04.787 [2024-07-25 12:00:01.739717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.434 ms 00:31:04.787 [2024-07-25 12:00:01.739730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.739803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.739821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:04.787 [2024-07-25 12:00:01.739834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:04.787 [2024-07-25 12:00:01.739851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.740257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.740283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:04.787 [2024-07-25 12:00:01.740297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:31:04.787 [2024-07-25 12:00:01.740308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.740463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.740483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:04.787 [2024-07-25 12:00:01.740495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:31:04.787 [2024-07-25 12:00:01.740506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.756500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.756548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:04.787 [2024-07-25 12:00:01.756567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:31:04.787 [2024-07-25 12:00:01.756583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.773028] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:04.787 [2024-07-25 12:00:01.773076] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:04.787 [2024-07-25 12:00:01.773095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.773108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:04.787 [2024-07-25 12:00:01.773121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.334 ms 00:31:04.787 [2024-07-25 12:00:01.773132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 12:00:01.803009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.803074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:04.787 [2024-07-25 12:00:01.803094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.823 ms 00:31:04.787 [2024-07-25 12:00:01.803105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.787 [2024-07-25 
12:00:01.818972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.787 [2024-07-25 12:00:01.819023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:04.787 [2024-07-25 12:00:01.819042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.789 ms 00:31:04.787 [2024-07-25 12:00:01.819053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.834461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.834510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:05.044 [2024-07-25 12:00:01.834527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.363 ms 00:31:05.044 [2024-07-25 12:00:01.834538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.835391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.835430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:05.044 [2024-07-25 12:00:01.835446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:31:05.044 [2024-07-25 12:00:01.835457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.907896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.907964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:05.044 [2024-07-25 12:00:01.907984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.408 ms 00:31:05.044 [2024-07-25 12:00:01.908004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.920716] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:05.044 [2024-07-25 12:00:01.923409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.923449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:05.044 [2024-07-25 12:00:01.923469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.335 ms 00:31:05.044 [2024-07-25 12:00:01.923480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.923603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.923624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:05.044 [2024-07-25 12:00:01.923638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:05.044 [2024-07-25 12:00:01.923650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.923767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.923800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:05.044 [2024-07-25 12:00:01.923813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:05.044 [2024-07-25 12:00:01.923825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.044 [2024-07-25 12:00:01.923857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.044 [2024-07-25 12:00:01.923873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:05.044 [2024-07-25 12:00:01.923885] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:31:05.044 [2024-07-25 12:00:01.923896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:05.044 [2024-07-25 12:00:01.923936] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:05.044 [2024-07-25 12:00:01.923953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:05.044 [2024-07-25 12:00:01.923969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:31:05.044 [2024-07-25 12:00:01.923980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:31:05.044 [2024-07-25 12:00:01.923991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:05.044 [2024-07-25 12:00:01.955318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:05.044 [2024-07-25 12:00:01.955381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:31:05.044 [2024-07-25 12:00:01.955402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.291 ms
00:31:05.044 [2024-07-25 12:00:01.955421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:05.044 [2024-07-25 12:00:01.955517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:05.044 [2024-07-25 12:00:01.955537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:05.044 [2024-07-25 12:00:01.955550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:31:05.044 [2024-07-25 12:00:01.955561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:05.044 [2024-07-25 12:00:01.956733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.047 ms, result 0
00:31:43.469  Copying: 1024/1024 [MB] (average 27 MBps)
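
The 'FTL startup' management process above completes in 336.047 ms, assembled from the Action/name/duration/status groups that trace_step in mngt/ftl_mngt.c prints for every management step. A minimal sketch for turning those groups into a per-step timing table, assuming the log has been reflowed to one entry per line and saved as ftl.log (both are assumptions, for illustration only):

    # List each management step next to its reported duration.
    awk '
      /trace_step.*name: /     { sub(/.*name: /, "");     step = $0 }
      /trace_step.*duration: / { sub(/.*duration: /, ""); printf "%-35s %s\n", step, $0 }
    ' ftl.log

Applied to the startup above, this singles out Restore P2L checkpoints (72.408 ms) and Initialize metadata (50.723 ms) as the two slowest steps.
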
[2024-07-25 12:00:40.417912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.417992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:43.469 [2024-07-25 12:00:40.418015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:43.469 [2024-07-25 12:00:40.418027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.469 [2024-07-25 12:00:40.418058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:43.469 [2024-07-25 12:00:40.421936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.421975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:43.469 [2024-07-25 12:00:40.421991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms
00:31:43.469 [2024-07-25 12:00:40.422009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.469 [2024-07-25 12:00:40.422249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.422273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:43.469 [2024-07-25 12:00:40.422286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms
00:31:43.469 [2024-07-25 12:00:40.422298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.469 [2024-07-25 12:00:40.426287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.426320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:43.469 [2024-07-25 12:00:40.426335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.970 ms
00:31:43.469 [2024-07-25 12:00:40.426346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.469 [2024-07-25 12:00:40.433153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.433632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:31:43.469 [2024-07-25 12:00:40.433648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.779 ms
00:31:43.469 [2024-07-25 12:00:40.433659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.469 [2024-07-25 12:00:40.466523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.466615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:31:43.469 [2024-07-25 12:00:40.466644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.765 ms
00:31:43.469 [2024-07-25 12:00:40.466655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.469 [2024-07-25 12:00:40.486402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.469 [2024-07-25 12:00:40.486471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:31:43.470 [2024-07-25 12:00:40.486491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.635 ms
00:31:43.470 [2024-07-25 12:00:40.486503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.470 [2024-07-25 12:00:40.486682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.470 [2024-07-25 12:00:40.486725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:31:43.470 [2024-07-25 12:00:40.486745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms
00:31:43.470 [2024-07-25 12:00:40.486757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.729 [2024-07-25 12:00:40.519125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.729 [2024-07-25 12:00:40.519178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:31:43.729 [2024-07-25 12:00:40.519197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.345 ms
00:31:43.729 [2024-07-25 12:00:40.519208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.729 [2024-07-25 12:00:40.550805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.729 [2024-07-25 12:00:40.550852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:31:43.729 [2024-07-25 12:00:40.550870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.548 ms
00:31:43.729 [2024-07-25 12:00:40.550881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.729 [2024-07-25 12:00:40.581593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.729 [2024-07-25 12:00:40.581638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:31:43.729 [2024-07-25 12:00:40.581670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.667 ms
00:31:43.729 [2024-07-25 12:00:40.581682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.729 [2024-07-25 12:00:40.612361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.729 [2024-07-25 12:00:40.612405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:31:43.729 [2024-07-25 12:00:40.612422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.568 ms
00:31:43.729 [2024-07-25 12:00:40.612433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.729 [2024-07-25 12:00:40.612477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:43.729 [2024-07-25 12:00:40.612503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands)
00:31:43.730 [2024-07-25 12:00:40.613724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:43.730 [2024-07-25 12:00:40.613735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a343668e-77a8-43d1-8f09-006b66036355
00:31:43.730 [2024-07-25 12:00:40.613754] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:31:43.730 [2024-07-25 12:00:40.613765] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:31:43.730 [2024-07-25 12:00:40.613776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:31:43.730 [2024-07-25 12:00:40.613788] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:31:43.730 [2024-07-25 12:00:40.613798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:43.730 [2024-07-25 12:00:40.613809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:31:43.730 [2024-07-25 12:00:40.613820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:31:43.730 [2024-07-25 12:00:40.613830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:31:43.730 [2024-07-25 12:00:40.613840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:31:43.730 [2024-07-25 12:00:40.613851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.730 [2024-07-25 12:00:40.613863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:43.730 [2024-07-25 12:00:40.613879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms
00:31:43.730 [2024-07-25 12:00:40.613890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
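
The "WAF: inf" line follows from the two counters just above it: write amplification is, in effect, total media writes divided by user writes, and this cycle recorded 960 internal metadata writes against zero user writes, so the ratio is undefined and printed as inf. A throwaway cross-check, under the same one-entry-per-line and ftl.log assumptions as the earlier sketch:

    # Recompute WAF from the dumped counters; guard the zero-user-write case.
    awk '/total writes:/ { t = $NF }
         /user writes:/  { u = $NF }
         END { print "WAF =", (u > 0 ? t / u : "inf") }' ftl.log
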
00:31:43.730 [2024-07-25 12:00:40.630829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.730 [2024-07-25 12:00:40.630899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:43.730 [2024-07-25 12:00:40.630939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.888 ms
00:31:43.730 [2024-07-25 12:00:40.630951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.730 [2024-07-25 12:00:40.631406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:43.730 [2024-07-25 12:00:40.631429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:43.730 [2024-07-25 12:00:40.631442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms
00:31:43.730 [2024-07-25 12:00:40.631454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.730 [2024-07-25 12:00:40.668416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.730 [2024-07-25 12:00:40.668472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:43.730 [2024-07-25 12:00:40.668489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.730 [2024-07-25 12:00:40.668501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.730 [2024-07-25 12:00:40.668575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.730 [2024-07-25 12:00:40.668591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:43.731 [2024-07-25 12:00:40.668603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.731 [2024-07-25 12:00:40.668614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.731 [2024-07-25 12:00:40.668731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.731 [2024-07-25 12:00:40.668752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:43.731 [2024-07-25 12:00:40.668765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.731 [2024-07-25 12:00:40.668776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.731 [2024-07-25 12:00:40.668799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.731 [2024-07-25 12:00:40.668813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:43.731 [2024-07-25 12:00:40.668825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.731 [2024-07-25 12:00:40.668836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.989 [2024-07-25 12:00:40.767013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.989 [2024-07-25 12:00:40.767071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:43.989 [2024-07-25 12:00:40.767089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.989 [2024-07-25 12:00:40.767101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:43.989 [2024-07-25 12:00:40.850624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:43.989 [2024-07-25 12:00:40.850725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:43.989 [2024-07-25 12:00:40.850745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:43.989 [2024-07-25 12:00:40.850757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.850876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.989 [2024-07-25 12:00:40.850895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:43.989 [2024-07-25 12:00:40.850907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.989 [2024-07-25 12:00:40.850918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.850967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.989 [2024-07-25 12:00:40.850982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:43.989 [2024-07-25 12:00:40.850994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.989 [2024-07-25 12:00:40.851005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.851129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.989 [2024-07-25 12:00:40.851155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:43.989 [2024-07-25 12:00:40.851168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.989 [2024-07-25 12:00:40.851179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.851229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.989 [2024-07-25 12:00:40.851248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:43.989 [2024-07-25 12:00:40.851260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.989 [2024-07-25 12:00:40.851271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.851317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.989 [2024-07-25 12:00:40.851341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:43.989 [2024-07-25 12:00:40.851353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.989 [2024-07-25 12:00:40.851363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.851417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.989 [2024-07-25 12:00:40.851434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:43.989 [2024-07-25 12:00:40.851446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.989 [2024-07-25 12:00:40.851457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.989 [2024-07-25 12:00:40.851603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 433.656 ms, result 0 00:31:44.921 00:31:44.921 00:31:44.921 12:00:41 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:47.445 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:47.445 12:00:44 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:47.445 [2024-07-25 12:00:44.221425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
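
The md5sum check and the spdk_dd invocation above are echoed from lines 76 and 79 of test/ftl/restore.sh: the file previously read back from ftl0 is verified against its stored checksum, then the test file is written to ftl0 again at an offset. The same pair restated as a standalone sketch (SPDK_DIR is an illustrative shorthand, not a variable from the script):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"   # verify the restored data
    "$SPDK_DIR/build/bin/spdk_dd" \
        --if="$SPDK_DIR/test/ftl/testfile" \
        --ob=ftl0 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json" \
        --seek=131072   # skip 131072 output blocks (512 MiB if the bdev uses 4 KiB blocks)
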
00:31:47.445 [2024-07-25 12:00:44.221587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81261 ] 00:31:47.445 [2024-07-25 12:00:44.384173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.701 [2024-07-25 12:00:44.569865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.957 [2024-07-25 12:00:44.878337] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:47.957 [2024-07-25 12:00:44.878416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:48.215 [2024-07-25 12:00:45.038235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.215 [2024-07-25 12:00:45.038302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:48.215 [2024-07-25 12:00:45.038324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:48.215 [2024-07-25 12:00:45.038336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.215 [2024-07-25 12:00:45.038403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.215 [2024-07-25 12:00:45.038423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:48.215 [2024-07-25 12:00:45.038436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:48.215 [2024-07-25 12:00:45.038451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.215 [2024-07-25 12:00:45.038487] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:48.215 [2024-07-25 12:00:45.039456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:48.215 [2024-07-25 12:00:45.039506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.215 [2024-07-25 12:00:45.039522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:48.215 [2024-07-25 12:00:45.039535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:31:48.215 [2024-07-25 12:00:45.039546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.215 [2024-07-25 12:00:45.040679] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:48.215 [2024-07-25 12:00:45.056906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.215 [2024-07-25 12:00:45.056953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:48.215 [2024-07-25 12:00:45.056972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.228 ms 00:31:48.215 [2024-07-25 12:00:45.056983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.215 [2024-07-25 12:00:45.057059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.215 [2024-07-25 12:00:45.057083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:48.215 [2024-07-25 12:00:45.057097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:48.215 [2024-07-25 12:00:45.057108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.215 [2024-07-25 12:00:45.061600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:48.215 [2024-07-25 12:00:45.061647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:48.215 [2024-07-25 12:00:45.061664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.397 ms 00:31:48.215 [2024-07-25 12:00:45.061676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.215 [2024-07-25 12:00:45.061809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.216 [2024-07-25 12:00:45.061831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:48.216 [2024-07-25 12:00:45.061844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:48.216 [2024-07-25 12:00:45.061856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.216 [2024-07-25 12:00:45.061924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.216 [2024-07-25 12:00:45.061943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:48.216 [2024-07-25 12:00:45.061955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:48.216 [2024-07-25 12:00:45.061966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.216 [2024-07-25 12:00:45.062002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:48.216 [2024-07-25 12:00:45.066274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.216 [2024-07-25 12:00:45.066314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:48.216 [2024-07-25 12:00:45.066330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:31:48.216 [2024-07-25 12:00:45.066342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.216 [2024-07-25 12:00:45.066388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.216 [2024-07-25 12:00:45.066404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:48.216 [2024-07-25 12:00:45.066417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:48.216 [2024-07-25 12:00:45.066428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.216 [2024-07-25 12:00:45.066475] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:48.216 [2024-07-25 12:00:45.066507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:48.216 [2024-07-25 12:00:45.066562] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:48.216 [2024-07-25 12:00:45.066590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:48.216 [2024-07-25 12:00:45.066714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:48.216 [2024-07-25 12:00:45.066733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:48.216 [2024-07-25 12:00:45.066749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:48.216 [2024-07-25 12:00:45.066763] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:48.216 [2024-07-25 12:00:45.066777] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:48.216 [2024-07-25 12:00:45.066790] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:48.216 [2024-07-25 12:00:45.066801] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:48.216 [2024-07-25 12:00:45.066812] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:48.216 [2024-07-25 12:00:45.066823] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:48.216 [2024-07-25 12:00:45.066835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.216 [2024-07-25 12:00:45.066852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:48.216 [2024-07-25 12:00:45.066865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:31:48.216 [2024-07-25 12:00:45.066876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.216 [2024-07-25 12:00:45.066969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.216 [2024-07-25 12:00:45.066985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:48.216 [2024-07-25 12:00:45.066997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:48.216 [2024-07-25 12:00:45.067008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.216 [2024-07-25 12:00:45.067143] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:48.216 [2024-07-25 12:00:45.067163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:48.216 [2024-07-25 12:00:45.067182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:48.216 [2024-07-25 12:00:45.067217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:48.216 [2024-07-25 12:00:45.067251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:48.216 [2024-07-25 12:00:45.067272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:48.216 [2024-07-25 12:00:45.067282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:48.216 [2024-07-25 12:00:45.067292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:48.216 [2024-07-25 12:00:45.067305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:48.216 [2024-07-25 12:00:45.067316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:48.216 [2024-07-25 12:00:45.067326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:48.216 [2024-07-25 12:00:45.067348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067358] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:48.216 [2024-07-25 12:00:45.067394] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:48.216 [2024-07-25 12:00:45.067427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:48.216 [2024-07-25 12:00:45.067458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:48.216 [2024-07-25 12:00:45.067489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:48.216 [2024-07-25 12:00:45.067520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:48.216 [2024-07-25 12:00:45.067541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:48.216 [2024-07-25 12:00:45.067551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:48.216 [2024-07-25 12:00:45.067562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:48.216 [2024-07-25 12:00:45.067572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:48.216 [2024-07-25 12:00:45.067582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:48.216 [2024-07-25 12:00:45.067593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:48.216 [2024-07-25 12:00:45.067613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:48.216 [2024-07-25 12:00:45.067624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067635] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:48.216 [2024-07-25 12:00:45.067646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:48.216 [2024-07-25 12:00:45.067657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:48.216 [2024-07-25 12:00:45.067679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:48.216 [2024-07-25 12:00:45.067706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:48.216 [2024-07-25 12:00:45.067720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:48.216 
[2024-07-25 12:00:45.067731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:48.216 [2024-07-25 12:00:45.067741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:48.216 [2024-07-25 12:00:45.067752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:48.216 [2024-07-25 12:00:45.067764] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:48.216 [2024-07-25 12:00:45.067778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:48.216 [2024-07-25 12:00:45.067791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:48.216 [2024-07-25 12:00:45.067803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:48.216 [2024-07-25 12:00:45.067814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:48.216 [2024-07-25 12:00:45.067826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:48.216 [2024-07-25 12:00:45.067837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:48.216 [2024-07-25 12:00:45.067849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:48.216 [2024-07-25 12:00:45.067860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:48.216 [2024-07-25 12:00:45.067871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:48.217 [2024-07-25 12:00:45.067883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:48.217 [2024-07-25 12:00:45.067894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:48.217 [2024-07-25 12:00:45.067906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:48.217 [2024-07-25 12:00:45.067917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:48.217 [2024-07-25 12:00:45.067928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:48.217 [2024-07-25 12:00:45.067940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:48.217 [2024-07-25 12:00:45.067952] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:48.217 [2024-07-25 12:00:45.067964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:48.217 [2024-07-25 12:00:45.067983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:48.217 [2024-07-25 12:00:45.067995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:48.217 [2024-07-25 12:00:45.068006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:48.217 [2024-07-25 12:00:45.068018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:48.217 [2024-07-25 12:00:45.068030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.068042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:48.217 [2024-07-25 12:00:45.068055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:31:48.217 [2024-07-25 12:00:45.068066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.112223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.112299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:48.217 [2024-07-25 12:00:45.112322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.093 ms 00:31:48.217 [2024-07-25 12:00:45.112334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.112463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.112482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:48.217 [2024-07-25 12:00:45.112496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:48.217 [2024-07-25 12:00:45.112507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.151020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.151086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:48.217 [2024-07-25 12:00:45.151108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.406 ms 00:31:48.217 [2024-07-25 12:00:45.151120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.151194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.151212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:48.217 [2024-07-25 12:00:45.151225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:48.217 [2024-07-25 12:00:45.151243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.151642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.151662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:48.217 [2024-07-25 12:00:45.151676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:31:48.217 [2024-07-25 12:00:45.151687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.151870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.151891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:48.217 [2024-07-25 12:00:45.151904] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:31:48.217 [2024-07-25 12:00:45.151915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.167993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.168038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:48.217 [2024-07-25 12:00:45.168057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.046 ms 00:31:48.217 [2024-07-25 12:00:45.168073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.184470] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:48.217 [2024-07-25 12:00:45.184517] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:48.217 [2024-07-25 12:00:45.184537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.184549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:48.217 [2024-07-25 12:00:45.184562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.326 ms 00:31:48.217 [2024-07-25 12:00:45.184573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.214350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.214403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:48.217 [2024-07-25 12:00:45.214421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.728 ms 00:31:48.217 [2024-07-25 12:00:45.214433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.230086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.230130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:48.217 [2024-07-25 12:00:45.230147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.621 ms 00:31:48.217 [2024-07-25 12:00:45.230158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.245620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.245663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:48.217 [2024-07-25 12:00:45.245679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.416 ms 00:31:48.217 [2024-07-25 12:00:45.245711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.217 [2024-07-25 12:00:45.246520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.217 [2024-07-25 12:00:45.246568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:48.217 [2024-07-25 12:00:45.246585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:31:48.217 [2024-07-25 12:00:45.246596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.326919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.326995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:48.475 [2024-07-25 12:00:45.327017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.291 ms 00:31:48.475 [2024-07-25 12:00:45.327037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.339757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:48.475 [2024-07-25 12:00:45.342411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.342450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:48.475 [2024-07-25 12:00:45.342470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.300 ms 00:31:48.475 [2024-07-25 12:00:45.342482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.342609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.342632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:48.475 [2024-07-25 12:00:45.342645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:48.475 [2024-07-25 12:00:45.342657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.342779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.342800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:48.475 [2024-07-25 12:00:45.342814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:48.475 [2024-07-25 12:00:45.342825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.342858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.342874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:48.475 [2024-07-25 12:00:45.342886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:48.475 [2024-07-25 12:00:45.342897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.342937] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:48.475 [2024-07-25 12:00:45.342957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.342973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:48.475 [2024-07-25 12:00:45.342986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:48.475 [2024-07-25 12:00:45.342996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.374350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.374421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:48.475 [2024-07-25 12:00:45.374442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.328 ms 00:31:48.475 [2024-07-25 12:00:45.374462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.475 [2024-07-25 12:00:45.374562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.475 [2024-07-25 12:00:45.374583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:48.475 [2024-07-25 12:00:45.374596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:48.475 [2024-07-25 12:00:45.374608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
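
Before the startup pipeline's summary line below, the log has emitted one Action/name/duration/status quadruplet per management step. A minimal sketch for tallying them, in Python (a hypothetical helper, not part of SPDK; it assumes only the trace_step format visible in this log):

import re

# Sum the per-step "duration: X ms" values printed by trace_step, so a slice
# of log covering one management process can be compared against the total
# that finish_msg reports (e.g. 'FTL startup', duration = 337.164 ms below).
DURATION_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def sum_step_durations(log_text: str) -> float:
    """Return the sum, in milliseconds, of all trace_step durations in log_text."""
    return sum(float(ms) for ms in DURATION_RE.findall(log_text))

Any gap between that sum and the finish_msg total is time spent between steps rather than inside them.
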
00:31:48.475 [2024-07-25 12:00:45.375940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.164 ms, result 0 00:32:29.287  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-25 12:01:26.095971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.096066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:29.287 [2024-07-25 12:01:26.096092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:29.287 [2024-07-25 12:01:26.096105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.098430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:29.287 [2024-07-25 12:01:26.105463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.105508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:29.287 [2024-07-25 12:01:26.105526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:32:29.287 [2024-07-25 12:01:26.105538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.118572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.118626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:29.287 [2024-07-25 12:01:26.118645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.661 ms 00:32:29.287 [2024-07-25 12:01:26.118657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.140578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.140648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:29.287 [2024-07-25 12:01:26.140669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.887 ms 00:32:29.287 [2024-07-25 12:01:26.140681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.147352] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.147407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:29.287 [2024-07-25 12:01:26.147440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.610 ms 00:32:29.287 [2024-07-25 12:01:26.147467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.178689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.178750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:29.287 [2024-07-25 12:01:26.178768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.164 ms 00:32:29.287 [2024-07-25 12:01:26.178780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.196365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.196419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:29.287 [2024-07-25 12:01:26.196467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.537 ms 00:32:29.287 [2024-07-25 12:01:26.196479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.287 [2024-07-25 12:01:26.292827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.287 [2024-07-25 12:01:26.292934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:29.287 [2024-07-25 12:01:26.292958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.293 ms 00:32:29.287 [2024-07-25 12:01:26.292970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.545 [2024-07-25 12:01:26.325447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.545 [2024-07-25 12:01:26.325524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:32:29.545 [2024-07-25 12:01:26.325557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.453 ms 00:32:29.545 [2024-07-25 12:01:26.325569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.545 [2024-07-25 12:01:26.357210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.545 [2024-07-25 12:01:26.357255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:32:29.545 [2024-07-25 12:01:26.357273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.595 ms 00:32:29.545 [2024-07-25 12:01:26.357284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.545 [2024-07-25 12:01:26.387963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.545 [2024-07-25 12:01:26.388005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:29.545 [2024-07-25 12:01:26.388053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.633 ms 00:32:29.545 [2024-07-25 12:01:26.388064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.545 [2024-07-25 12:01:26.419139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.545 [2024-07-25 12:01:26.419201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:29.545 [2024-07-25 12:01:26.419236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.981 ms 00:32:29.545 [2024-07-25 12:01:26.419247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:29.545 [2024-07-25 12:01:26.419296] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:29.545 [2024-07-25 12:01:26.419322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115968 / 261120 wr_cnt: 1 state: open 00:32:29.545 [2024-07-25 12:01:26.419337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:32:29.545 [2024-07-25 12:01:26.419607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:29.545 [2024-07-25 12:01:26.419945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.419957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.419969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.419980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.419992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420680] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:29.546 [2024-07-25 12:01:26.420730] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:29.546 [2024-07-25 12:01:26.420742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a343668e-77a8-43d1-8f09-006b66036355 00:32:29.546 [2024-07-25 12:01:26.420754] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115968 00:32:29.546 [2024-07-25 12:01:26.420765] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116928 00:32:29.546 [2024-07-25 12:01:26.420776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115968 00:32:29.546 [2024-07-25 12:01:26.420796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:32:29.546 [2024-07-25 12:01:26.420807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:29.546 [2024-07-25 12:01:26.420819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:29.546 [2024-07-25 12:01:26.420843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:29.546 [2024-07-25 12:01:26.420862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:29.546 [2024-07-25 12:01:26.420882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:29.546 [2024-07-25 12:01:26.420903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.546 [2024-07-25 12:01:26.420917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:29.546 [2024-07-25 12:01:26.420930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:32:29.546 [2024-07-25 12:01:26.420941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.437466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.546 [2024-07-25 12:01:26.437507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:29.546 [2024-07-25 12:01:26.437555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.477 ms 00:32:29.546 [2024-07-25 12:01:26.437566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.438063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.546 [2024-07-25 12:01:26.438110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:29.546 [2024-07-25 12:01:26.438127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:32:29.546 [2024-07-25 12:01:26.438139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.474470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.546 [2024-07-25 12:01:26.474540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:29.546 [2024-07-25 12:01:26.474565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.546 [2024-07-25 12:01:26.474577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.474654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.546 [2024-07-25 12:01:26.474670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
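
A quick check of the statistics dump above, taking the counters at face value: WAF = total writes / user writes = 116928 / 115968 ≈ 1.0083, exactly the value printed; user writes = 115968 = total valid LBAs, which is also Band 1's fill level (115968 / 261120) — everything written so far sits in the single open band. The 1024/1024 [MB] transfer above at an average 25 MBps works out to ≈ 41 s, consistent with the wall clock between the startup finishing at 12:00:45 and the shutdown steps beginning at 12:01:26. The Rollback entries around this point all report duration: 0.000 ms with status 0; in context they read as the shutdown path unwinding each startup step's cleanup handler rather than error recovery — an inference from this log, not a statement about SPDK internals.
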
00:32:29.546 [2024-07-25 12:01:26.474683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.546 [2024-07-25 12:01:26.474709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.474802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.546 [2024-07-25 12:01:26.474822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:29.546 [2024-07-25 12:01:26.474836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.546 [2024-07-25 12:01:26.474853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.474876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.546 [2024-07-25 12:01:26.474890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:29.546 [2024-07-25 12:01:26.474902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.546 [2024-07-25 12:01:26.474913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.546 [2024-07-25 12:01:26.575007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.546 [2024-07-25 12:01:26.575066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:29.546 [2024-07-25 12:01:26.575085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.546 [2024-07-25 12:01:26.575107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.659966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:29.805 [2024-07-25 12:01:26.660073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:29.805 [2024-07-25 12:01:26.660215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:29.805 [2024-07-25 12:01:26.660312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:29.805 [2024-07-25 12:01:26.660475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660557] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:29.805 [2024-07-25 12:01:26.660569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:29.805 [2024-07-25 12:01:26.660653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:29.805 [2024-07-25 12:01:26.660766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:29.805 [2024-07-25 12:01:26.660779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:29.805 [2024-07-25 12:01:26.660790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.805 [2024-07-25 12:01:26.660929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 566.028 ms, result 0 00:32:31.703 00:32:31.703 00:32:31.703 12:01:28 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:31.703 [2024-07-25 12:01:28.518167] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
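
For the spdk_dd invocation above: --ib=ftl0 reads from the FTL bdev and --of writes the result into the test file, while, as with dd(1), --skip and --count appear to be counted in input blocks. Assuming ftl0's 4 KiB block size, --skip=131072 is a 512 MiB offset into the device and --count=262144 copies exactly 1024 MiB, matching the 1024 [MB] transfer sizes seen elsewhere in this run.
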
00:32:31.703 [2024-07-25 12:01:28.518318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81697 ] 00:32:31.703 [2024-07-25 12:01:28.683140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.961 [2024-07-25 12:01:28.919716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.220 [2024-07-25 12:01:29.238655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:32.220 [2024-07-25 12:01:29.238759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:32.480 [2024-07-25 12:01:29.398999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.399072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:32.480 [2024-07-25 12:01:29.399093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:32.480 [2024-07-25 12:01:29.399106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.399179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.399199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:32.480 [2024-07-25 12:01:29.399212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:32.480 [2024-07-25 12:01:29.399227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.399263] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:32.480 [2024-07-25 12:01:29.400250] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:32.480 [2024-07-25 12:01:29.400295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.400311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:32.480 [2024-07-25 12:01:29.400323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:32:32.480 [2024-07-25 12:01:29.400334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.401510] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:32.480 [2024-07-25 12:01:29.417890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.417935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:32.480 [2024-07-25 12:01:29.417953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms 00:32:32.480 [2024-07-25 12:01:29.417965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.418038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.418061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:32.480 [2024-07-25 12:01:29.418074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:32.480 [2024-07-25 12:01:29.418085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.422530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:32.480 [2024-07-25 12:01:29.422574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:32.480 [2024-07-25 12:01:29.422590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:32:32.480 [2024-07-25 12:01:29.422602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.422724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.422746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:32.480 [2024-07-25 12:01:29.422760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:32.480 [2024-07-25 12:01:29.422771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.422839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.422858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:32.480 [2024-07-25 12:01:29.422878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:32.480 [2024-07-25 12:01:29.422889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.422924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:32.480 [2024-07-25 12:01:29.427186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.427232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:32.480 [2024-07-25 12:01:29.427248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:32:32.480 [2024-07-25 12:01:29.427260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.427312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.427329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:32.480 [2024-07-25 12:01:29.427342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:32.480 [2024-07-25 12:01:29.427353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.427408] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:32.480 [2024-07-25 12:01:29.427440] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:32.480 [2024-07-25 12:01:29.427485] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:32.480 [2024-07-25 12:01:29.427510] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:32.480 [2024-07-25 12:01:29.427618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:32.480 [2024-07-25 12:01:29.427633] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:32.480 [2024-07-25 12:01:29.427648] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:32.480 [2024-07-25 12:01:29.427663] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:32.480 [2024-07-25 12:01:29.427677] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:32.480 [2024-07-25 12:01:29.427709] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:32.480 [2024-07-25 12:01:29.427724] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:32.480 [2024-07-25 12:01:29.427735] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:32.480 [2024-07-25 12:01:29.427746] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:32.480 [2024-07-25 12:01:29.427758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.427774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:32.480 [2024-07-25 12:01:29.427786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:32:32.480 [2024-07-25 12:01:29.427797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.427895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.480 [2024-07-25 12:01:29.427920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:32.480 [2024-07-25 12:01:29.427932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:32.480 [2024-07-25 12:01:29.427943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.480 [2024-07-25 12:01:29.428053] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:32.480 [2024-07-25 12:01:29.428071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:32.480 [2024-07-25 12:01:29.428089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:32.480 [2024-07-25 12:01:29.428122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:32.480 [2024-07-25 12:01:29.428154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:32.480 [2024-07-25 12:01:29.428174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:32.480 [2024-07-25 12:01:29.428184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:32.480 [2024-07-25 12:01:29.428194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:32.480 [2024-07-25 12:01:29.428205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:32.480 [2024-07-25 12:01:29.428216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:32.480 [2024-07-25 12:01:29.428226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:32.480 [2024-07-25 12:01:29.428247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428257] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:32.480 [2024-07-25 12:01:29.428295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:32.480 [2024-07-25 12:01:29.428326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:32.480 [2024-07-25 12:01:29.428359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:32.480 [2024-07-25 12:01:29.428389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.480 [2024-07-25 12:01:29.428409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:32.480 [2024-07-25 12:01:29.428419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:32.480 [2024-07-25 12:01:29.428430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:32.481 [2024-07-25 12:01:29.428439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:32.481 [2024-07-25 12:01:29.428450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:32.481 [2024-07-25 12:01:29.428460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:32.481 [2024-07-25 12:01:29.428470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:32.481 [2024-07-25 12:01:29.428480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:32.481 [2024-07-25 12:01:29.428493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.481 [2024-07-25 12:01:29.428503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:32.481 [2024-07-25 12:01:29.428513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:32.481 [2024-07-25 12:01:29.428523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.481 [2024-07-25 12:01:29.428532] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:32.481 [2024-07-25 12:01:29.428543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:32.481 [2024-07-25 12:01:29.428554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:32.481 [2024-07-25 12:01:29.428564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.481 [2024-07-25 12:01:29.428578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:32.481 [2024-07-25 12:01:29.428589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:32.481 [2024-07-25 12:01:29.428600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:32.481 
[2024-07-25 12:01:29.428610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:32.481 [2024-07-25 12:01:29.428620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:32.481 [2024-07-25 12:01:29.428630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:32.481 [2024-07-25 12:01:29.428642] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:32.481 [2024-07-25 12:01:29.428656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:32.481 [2024-07-25 12:01:29.428679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:32.481 [2024-07-25 12:01:29.428705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:32.481 [2024-07-25 12:01:29.428719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:32.481 [2024-07-25 12:01:29.428730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:32.481 [2024-07-25 12:01:29.428741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:32.481 [2024-07-25 12:01:29.428752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:32.481 [2024-07-25 12:01:29.428764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:32.481 [2024-07-25 12:01:29.428775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:32.481 [2024-07-25 12:01:29.428786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:32.481 [2024-07-25 12:01:29.428842] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:32.481 [2024-07-25 12:01:29.428854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:32.481 [2024-07-25 12:01:29.428882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:32.481 [2024-07-25 12:01:29.428893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:32.481 [2024-07-25 12:01:29.428904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:32.481 [2024-07-25 12:01:29.428916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.428927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:32.481 [2024-07-25 12:01:29.428939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:32:32.481 [2024-07-25 12:01:29.428950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.481 [2024-07-25 12:01:29.470559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.470633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:32.481 [2024-07-25 12:01:29.470656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.517 ms 00:32:32.481 [2024-07-25 12:01:29.470668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.481 [2024-07-25 12:01:29.470820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.470840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:32.481 [2024-07-25 12:01:29.470854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:32.481 [2024-07-25 12:01:29.470865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.481 [2024-07-25 12:01:29.509040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.509099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:32.481 [2024-07-25 12:01:29.509119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.075 ms 00:32:32.481 [2024-07-25 12:01:29.509131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.481 [2024-07-25 12:01:29.509194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.509212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:32.481 [2024-07-25 12:01:29.509226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:32.481 [2024-07-25 12:01:29.509243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.481 [2024-07-25 12:01:29.509644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.509663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:32.481 [2024-07-25 12:01:29.509677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:32:32.481 [2024-07-25 12:01:29.509688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.481 [2024-07-25 12:01:29.509888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.481 [2024-07-25 12:01:29.509908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:32.481 [2024-07-25 12:01:29.509921] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:32:32.481 [2024-07-25 12:01:29.509932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.525984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.526031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:32.740 [2024-07-25 12:01:29.526049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.020 ms 00:32:32.740 [2024-07-25 12:01:29.526065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.542403] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:32.740 [2024-07-25 12:01:29.542450] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:32.740 [2024-07-25 12:01:29.542470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.542482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:32.740 [2024-07-25 12:01:29.542496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.266 ms 00:32:32.740 [2024-07-25 12:01:29.542515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.572327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.572379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:32.740 [2024-07-25 12:01:29.572398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.760 ms 00:32:32.740 [2024-07-25 12:01:29.572410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.588144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.588189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:32.740 [2024-07-25 12:01:29.588206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.677 ms 00:32:32.740 [2024-07-25 12:01:29.588218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.603764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.603807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:32.740 [2024-07-25 12:01:29.603824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.499 ms 00:32:32.740 [2024-07-25 12:01:29.603835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.604654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.604712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:32.740 [2024-07-25 12:01:29.604731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:32:32.740 [2024-07-25 12:01:29.604742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.677204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.677276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:32.740 [2024-07-25 12:01:29.677297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.429 ms 00:32:32.740 [2024-07-25 12:01:29.677317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.689884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:32.740 [2024-07-25 12:01:29.692552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.692591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:32.740 [2024-07-25 12:01:29.692609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.156 ms 00:32:32.740 [2024-07-25 12:01:29.692621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.692768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.692791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:32.740 [2024-07-25 12:01:29.692806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:32.740 [2024-07-25 12:01:29.692817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.694338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.694375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:32.740 [2024-07-25 12:01:29.694390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:32:32.740 [2024-07-25 12:01:29.694401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.694439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.694455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:32.740 [2024-07-25 12:01:29.694468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:32.740 [2024-07-25 12:01:29.694479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.694531] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:32.740 [2024-07-25 12:01:29.694550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.740 [2024-07-25 12:01:29.694565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:32.740 [2024-07-25 12:01:29.694577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:32.740 [2024-07-25 12:01:29.694588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.740 [2024-07-25 12:01:29.725646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.741 [2024-07-25 12:01:29.725718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:32.741 [2024-07-25 12:01:29.725737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.033 ms 00:32:32.741 [2024-07-25 12:01:29.725768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.741 [2024-07-25 12:01:29.725855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.741 [2024-07-25 12:01:29.725876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:32.741 [2024-07-25 12:01:29.725889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:32.741 [2024-07-25 12:01:29.725901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:32.741 [2024-07-25 12:01:29.732998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.404 ms, result 0 00:33:13.474  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-25 12:02:10.332366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.332460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:13.474 [2024-07-25 12:02:10.332494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:13.474 [2024-07-25 12:02:10.332513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.474 [2024-07-25 12:02:10.332580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:13.474 [2024-07-25 12:02:10.337706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.337771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:13.474 [2024-07-25 12:02:10.337792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.090 ms 00:33:13.474 [2024-07-25 12:02:10.337808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.474 [2024-07-25 12:02:10.338104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.338134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:13.474 [2024-07-25 12:02:10.338150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:33:13.474 [2024-07-25 12:02:10.338163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.474 [2024-07-25 12:02:10.344058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.344125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:13.474 [2024-07-25 12:02:10.344154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.854 ms 00:33:13.474 [2024-07-25 12:02:10.344177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.474 [2024-07-25 12:02:10.353342] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.353408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:13.474 [2024-07-25 12:02:10.353435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.107 ms 00:33:13.474 [2024-07-25 12:02:10.353456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.474 [2024-07-25 12:02:10.395617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.395688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:13.474 [2024-07-25 12:02:10.395740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.052 ms 00:33:13.474 [2024-07-25 12:02:10.395760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.474 [2024-07-25 12:02:10.418707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.474 [2024-07-25 12:02:10.418764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:13.474 [2024-07-25 12:02:10.418795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.873 ms 00:33:13.474 [2024-07-25 12:02:10.418810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.732 [2024-07-25 12:02:10.517994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.732 [2024-07-25 12:02:10.518090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:13.732 [2024-07-25 12:02:10.518122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.112 ms 00:33:13.732 [2024-07-25 12:02:10.518142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.732 [2024-07-25 12:02:10.561968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.732 [2024-07-25 12:02:10.562058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:33:13.732 [2024-07-25 12:02:10.562087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.788 ms 00:33:13.732 [2024-07-25 12:02:10.562107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.732 [2024-07-25 12:02:10.603354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.732 [2024-07-25 12:02:10.603424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:33:13.732 [2024-07-25 12:02:10.603455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.171 ms 00:33:13.732 [2024-07-25 12:02:10.603470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.732 [2024-07-25 12:02:10.642437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.732 [2024-07-25 12:02:10.642506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:13.732 [2024-07-25 12:02:10.642526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.893 ms 00:33:13.732 [2024-07-25 12:02:10.642555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.732 [2024-07-25 12:02:10.673614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.732 [2024-07-25 12:02:10.673665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:13.732 [2024-07-25 12:02:10.673684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.946 ms 00:33:13.732 [2024-07-25 12:02:10.673712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:13.732 [2024-07-25 12:02:10.673762] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:13.732 [2024-07-25 12:02:10.673787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:33:13.732 [2024-07-25 12:02:10.673802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:13.732 [2024-07-25 12:02:10.673933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.673945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.673957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.673969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.673980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.673992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:33:13.733 [2024-07-25 12:02:10.674073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.674991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.675002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.675017] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.675029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:13.733 [2024-07-25 12:02:10.675049] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:13.733 [2024-07-25 12:02:10.675062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a343668e-77a8-43d1-8f09-006b66036355 00:33:13.733 [2024-07-25 12:02:10.675073] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:33:13.733 [2024-07-25 12:02:10.675084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18624 00:33:13.733 [2024-07-25 12:02:10.675094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17664 00:33:13.733 [2024-07-25 12:02:10.675114] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0543 00:33:13.733 [2024-07-25 12:02:10.675125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:13.733 [2024-07-25 12:02:10.675137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:13.733 [2024-07-25 12:02:10.675152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:13.733 [2024-07-25 12:02:10.675161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:13.733 [2024-07-25 12:02:10.675171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:13.733 [2024-07-25 12:02:10.675183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.733 [2024-07-25 12:02:10.675195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:13.733 [2024-07-25 12:02:10.675206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:33:13.733 [2024-07-25 12:02:10.675217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.733 [2024-07-25 12:02:10.691731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.733 [2024-07-25 12:02:10.691775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:13.733 [2024-07-25 12:02:10.691792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.470 ms 00:33:13.733 [2024-07-25 12:02:10.691819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.733 [2024-07-25 12:02:10.692254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.733 [2024-07-25 12:02:10.692276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:13.733 [2024-07-25 12:02:10.692291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:33:13.733 [2024-07-25 12:02:10.692312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.733 [2024-07-25 12:02:10.729483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.733 [2024-07-25 12:02:10.729566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:13.733 [2024-07-25 12:02:10.729590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.733 [2024-07-25 12:02:10.729603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.733 [2024-07-25 12:02:10.729707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.733 [2024-07-25 12:02:10.729727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:33:13.733 [2024-07-25 12:02:10.729740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.733 [2024-07-25 12:02:10.729751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.733 [2024-07-25 12:02:10.729863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.733 [2024-07-25 12:02:10.729883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:13.733 [2024-07-25 12:02:10.729896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.733 [2024-07-25 12:02:10.729914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.733 [2024-07-25 12:02:10.729938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.733 [2024-07-25 12:02:10.729953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:13.733 [2024-07-25 12:02:10.729965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.733 [2024-07-25 12:02:10.729976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.829538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.829622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:13.992 [2024-07-25 12:02:10.829658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.829678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.915923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.915995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:13.992 [2024-07-25 12:02:10.916015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.916166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:13.992 [2024-07-25 12:02:10.916179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.916265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:13.992 [2024-07-25 12:02:10.916278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.916431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:13.992 [2024-07-25 12:02:10.916445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.916536] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:13.992 [2024-07-25 12:02:10.916548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.916621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:13.992 [2024-07-25 12:02:10.916633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:13.992 [2024-07-25 12:02:10.916751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:13.992 [2024-07-25 12:02:10.916763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:13.992 [2024-07-25 12:02:10.916775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.992 [2024-07-25 12:02:10.916915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 584.526 ms, result 0 00:33:15.367 00:33:15.367 00:33:15.367 12:02:11 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:17.265 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:17.265 12:02:14 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:17.265 12:02:14 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:33:17.265 12:02:14 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:17.523 12:02:14 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:17.523 12:02:14 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:17.523 12:02:14 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80179 00:33:17.523 12:02:14 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80179 ']' 00:33:17.523 12:02:14 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80179 00:33:17.524 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80179) - No such process 00:33:17.524 Process with pid 80179 is not found 00:33:17.524 12:02:14 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80179 is not found' 00:33:17.524 Remove shared memory files 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:17.524 12:02:14 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:33:17.524 ************************************ 00:33:17.524 END TEST ftl_restore 00:33:17.524 ************************************ 00:33:17.524 00:33:17.524 real 3m14.630s 00:33:17.524 user 3m0.586s 00:33:17.524 sys 0m16.167s 00:33:17.524 12:02:14 ftl.ftl_restore -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:33:17.524 12:02:14 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:33:17.524 12:02:14 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:33:17.524 12:02:14 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:17.524 12:02:14 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:17.524 12:02:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:17.524 ************************************ 00:33:17.524 START TEST ftl_dirty_shutdown 00:33:17.524 ************************************ 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:33:17.524 * Looking for test storage... 00:33:17.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:33:17.524 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:17.782 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82216 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82216 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82216 ']' 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.783 12:02:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:17.783 [2024-07-25 12:02:14.681216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:17.783 [2024-07-25 12:02:14.681597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82216 ] 00:33:18.041 [2024-07-25 12:02:14.855731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.041 [2024-07-25 12:02:15.068785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:33:18.975 12:02:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:19.233 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:19.492 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:19.492 { 00:33:19.492 "name": "nvme0n1", 00:33:19.492 "aliases": [ 00:33:19.492 "b740f134-fd5f-4515-a80f-f2834e9cddac" 00:33:19.492 ], 00:33:19.492 "product_name": "NVMe disk", 00:33:19.492 "block_size": 4096, 00:33:19.492 "num_blocks": 1310720, 00:33:19.492 "uuid": "b740f134-fd5f-4515-a80f-f2834e9cddac", 00:33:19.492 "assigned_rate_limits": { 00:33:19.492 "rw_ios_per_sec": 0, 00:33:19.492 "rw_mbytes_per_sec": 0, 00:33:19.492 "r_mbytes_per_sec": 0, 00:33:19.492 "w_mbytes_per_sec": 0 00:33:19.492 }, 00:33:19.492 "claimed": true, 00:33:19.492 "claim_type": "read_many_write_one", 00:33:19.492 "zoned": false, 00:33:19.492 "supported_io_types": { 00:33:19.492 "read": true, 00:33:19.492 "write": true, 00:33:19.492 "unmap": true, 00:33:19.492 "flush": true, 00:33:19.492 "reset": true, 00:33:19.492 "nvme_admin": true, 00:33:19.492 "nvme_io": true, 00:33:19.492 "nvme_io_md": false, 00:33:19.492 "write_zeroes": true, 00:33:19.492 "zcopy": false, 00:33:19.492 "get_zone_info": false, 00:33:19.492 "zone_management": false, 00:33:19.492 "zone_append": false, 00:33:19.492 "compare": true, 00:33:19.492 "compare_and_write": false, 00:33:19.492 "abort": true, 00:33:19.492 "seek_hole": false, 00:33:19.492 "seek_data": false, 00:33:19.492 "copy": true, 00:33:19.492 
"nvme_iov_md": false 00:33:19.492 }, 00:33:19.492 "driver_specific": { 00:33:19.492 "nvme": [ 00:33:19.492 { 00:33:19.492 "pci_address": "0000:00:11.0", 00:33:19.492 "trid": { 00:33:19.492 "trtype": "PCIe", 00:33:19.492 "traddr": "0000:00:11.0" 00:33:19.492 }, 00:33:19.492 "ctrlr_data": { 00:33:19.492 "cntlid": 0, 00:33:19.492 "vendor_id": "0x1b36", 00:33:19.492 "model_number": "QEMU NVMe Ctrl", 00:33:19.492 "serial_number": "12341", 00:33:19.492 "firmware_revision": "8.0.0", 00:33:19.492 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:19.492 "oacs": { 00:33:19.492 "security": 0, 00:33:19.492 "format": 1, 00:33:19.492 "firmware": 0, 00:33:19.492 "ns_manage": 1 00:33:19.492 }, 00:33:19.492 "multi_ctrlr": false, 00:33:19.492 "ana_reporting": false 00:33:19.492 }, 00:33:19.492 "vs": { 00:33:19.492 "nvme_version": "1.4" 00:33:19.492 }, 00:33:19.492 "ns_data": { 00:33:19.492 "id": 1, 00:33:19.492 "can_share": false 00:33:19.492 } 00:33:19.492 } 00:33:19.492 ], 00:33:19.492 "mp_policy": "active_passive" 00:33:19.492 } 00:33:19.492 } 00:33:19.492 ]' 00:33:19.492 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:19.492 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:19.492 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:19.750 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:20.008 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=00ec056a-38ef-4cc9-bbcb-bb05c2148b43 00:33:20.008 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:33:20.008 12:02:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 00ec056a-38ef-4cc9-bbcb-bb05c2148b43 00:33:20.266 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:20.524 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0c36a6ed-73ba-427a-9593-23f206854dbb 00:33:20.524 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c36a6ed-73ba-427a-9593-23f206854dbb 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:20.784 
12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:20.784 { 00:33:20.784 "name": "b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3", 00:33:20.784 "aliases": [ 00:33:20.784 "lvs/nvme0n1p0" 00:33:20.784 ], 00:33:20.784 "product_name": "Logical Volume", 00:33:20.784 "block_size": 4096, 00:33:20.784 "num_blocks": 26476544, 00:33:20.784 "uuid": "b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3", 00:33:20.784 "assigned_rate_limits": { 00:33:20.784 "rw_ios_per_sec": 0, 00:33:20.784 "rw_mbytes_per_sec": 0, 00:33:20.784 "r_mbytes_per_sec": 0, 00:33:20.784 "w_mbytes_per_sec": 0 00:33:20.784 }, 00:33:20.784 "claimed": false, 00:33:20.784 "zoned": false, 00:33:20.784 "supported_io_types": { 00:33:20.784 "read": true, 00:33:20.784 "write": true, 00:33:20.784 "unmap": true, 00:33:20.784 "flush": false, 00:33:20.784 "reset": true, 00:33:20.784 "nvme_admin": false, 00:33:20.784 "nvme_io": false, 00:33:20.784 "nvme_io_md": false, 00:33:20.784 "write_zeroes": true, 00:33:20.784 "zcopy": false, 00:33:20.784 "get_zone_info": false, 00:33:20.784 "zone_management": false, 00:33:20.784 "zone_append": false, 00:33:20.784 "compare": false, 00:33:20.784 "compare_and_write": false, 00:33:20.784 "abort": false, 00:33:20.784 "seek_hole": true, 00:33:20.784 "seek_data": true, 00:33:20.784 "copy": false, 00:33:20.784 "nvme_iov_md": false 00:33:20.784 }, 00:33:20.784 "driver_specific": { 00:33:20.784 "lvol": { 00:33:20.784 "lvol_store_uuid": "0c36a6ed-73ba-427a-9593-23f206854dbb", 00:33:20.784 "base_bdev": "nvme0n1", 00:33:20.784 "thin_provision": true, 00:33:20.784 "num_allocated_clusters": 0, 00:33:20.784 "snapshot": false, 00:33:20.784 "clone": false, 00:33:20.784 "esnap_clone": false 00:33:20.784 } 00:33:20.784 } 00:33:20.784 } 00:33:20.784 ]' 00:33:20.784 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:21.044 12:02:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:21.303 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:21.562 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:21.562 { 00:33:21.562 "name": "b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3", 00:33:21.562 "aliases": [ 00:33:21.562 "lvs/nvme0n1p0" 00:33:21.562 ], 00:33:21.562 "product_name": "Logical Volume", 00:33:21.562 "block_size": 4096, 00:33:21.562 "num_blocks": 26476544, 00:33:21.562 "uuid": "b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3", 00:33:21.562 "assigned_rate_limits": { 00:33:21.562 "rw_ios_per_sec": 0, 00:33:21.562 "rw_mbytes_per_sec": 0, 00:33:21.562 "r_mbytes_per_sec": 0, 00:33:21.562 "w_mbytes_per_sec": 0 00:33:21.562 }, 00:33:21.562 "claimed": false, 00:33:21.562 "zoned": false, 00:33:21.562 "supported_io_types": { 00:33:21.562 "read": true, 00:33:21.562 "write": true, 00:33:21.562 "unmap": true, 00:33:21.562 "flush": false, 00:33:21.562 "reset": true, 00:33:21.562 "nvme_admin": false, 00:33:21.562 "nvme_io": false, 00:33:21.562 "nvme_io_md": false, 00:33:21.562 "write_zeroes": true, 00:33:21.562 "zcopy": false, 00:33:21.562 "get_zone_info": false, 00:33:21.562 "zone_management": false, 00:33:21.562 "zone_append": false, 00:33:21.562 "compare": false, 00:33:21.562 "compare_and_write": false, 00:33:21.562 "abort": false, 00:33:21.563 "seek_hole": true, 00:33:21.563 "seek_data": true, 00:33:21.563 "copy": false, 00:33:21.563 "nvme_iov_md": false 00:33:21.563 }, 00:33:21.563 "driver_specific": { 00:33:21.563 "lvol": { 00:33:21.563 "lvol_store_uuid": "0c36a6ed-73ba-427a-9593-23f206854dbb", 00:33:21.563 "base_bdev": "nvme0n1", 00:33:21.563 "thin_provision": true, 00:33:21.563 "num_allocated_clusters": 0, 00:33:21.563 "snapshot": false, 00:33:21.563 "clone": false, 00:33:21.563 "esnap_clone": false 00:33:21.563 } 00:33:21.563 } 00:33:21.563 } 00:33:21.563 ]' 00:33:21.563 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:21.563 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:21.563 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:21.822 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:22.082 12:02:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:22.342 { 00:33:22.342 "name": "b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3", 00:33:22.342 "aliases": [ 00:33:22.342 "lvs/nvme0n1p0" 00:33:22.342 ], 00:33:22.342 "product_name": "Logical Volume", 00:33:22.342 "block_size": 4096, 00:33:22.342 "num_blocks": 26476544, 00:33:22.342 "uuid": "b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3", 00:33:22.342 "assigned_rate_limits": { 00:33:22.342 "rw_ios_per_sec": 0, 00:33:22.342 "rw_mbytes_per_sec": 0, 00:33:22.342 "r_mbytes_per_sec": 0, 00:33:22.342 "w_mbytes_per_sec": 0 00:33:22.342 }, 00:33:22.342 "claimed": false, 00:33:22.342 "zoned": false, 00:33:22.342 "supported_io_types": { 00:33:22.342 "read": true, 00:33:22.342 "write": true, 00:33:22.342 "unmap": true, 00:33:22.342 "flush": false, 00:33:22.342 "reset": true, 00:33:22.342 "nvme_admin": false, 00:33:22.342 "nvme_io": false, 00:33:22.342 "nvme_io_md": false, 00:33:22.342 "write_zeroes": true, 00:33:22.342 "zcopy": false, 00:33:22.342 "get_zone_info": false, 00:33:22.342 "zone_management": false, 00:33:22.342 "zone_append": false, 00:33:22.342 "compare": false, 00:33:22.342 "compare_and_write": false, 00:33:22.342 "abort": false, 00:33:22.342 "seek_hole": true, 00:33:22.342 "seek_data": true, 00:33:22.342 "copy": false, 00:33:22.342 "nvme_iov_md": false 00:33:22.342 }, 00:33:22.342 "driver_specific": { 00:33:22.342 "lvol": { 00:33:22.342 "lvol_store_uuid": "0c36a6ed-73ba-427a-9593-23f206854dbb", 00:33:22.342 "base_bdev": "nvme0n1", 00:33:22.342 "thin_provision": true, 00:33:22.342 "num_allocated_clusters": 0, 00:33:22.342 "snapshot": false, 00:33:22.342 "clone": false, 00:33:22.342 "esnap_clone": false 00:33:22.342 } 00:33:22.342 } 00:33:22.342 } 00:33:22.342 ]' 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:33:22.342 12:02:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:33:22.343 12:02:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 
--l2p_dram_limit 10' 00:33:22.343 12:02:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:33:22.343 12:02:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:33:22.343 12:02:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:22.343 12:02:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 --l2p_dram_limit 10 -c nvc0n1p0 00:33:22.608 [2024-07-25 12:02:19.428373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.428446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:22.608 [2024-07-25 12:02:19.428469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:22.608 [2024-07-25 12:02:19.428484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.428569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.428592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:22.608 [2024-07-25 12:02:19.428606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:22.608 [2024-07-25 12:02:19.428620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.428650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:22.608 [2024-07-25 12:02:19.429652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:22.608 [2024-07-25 12:02:19.429717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.429745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:22.608 [2024-07-25 12:02:19.429766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:33:22.608 [2024-07-25 12:02:19.429786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.429936] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 15ca77a5-df83-4821-8402-2eecbfb8ae8b 00:33:22.608 [2024-07-25 12:02:19.431150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.431206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:22.608 [2024-07-25 12:02:19.431230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:22.608 [2024-07-25 12:02:19.431252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.436274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.436328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:22.608 [2024-07-25 12:02:19.436351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:33:22.608 [2024-07-25 12:02:19.436363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.436490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.436512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:22.608 [2024-07-25 12:02:19.436528] 
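For context on the construction step traced above: the FTL bdev combines two devices, the thin-provisioned 103424 MiB lvol as its base (-d) and nvc0n1p0, the 5171 MiB split carved from the 0000:00:10.0 controller, as its non-volatile write buffer cache (-c). A minimal sketch of the call, using the names from this run; the -t 240 timeout is presumably generous to cover slow first-start work such as the NV cache scrub:

    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d b5c08a92-d71c-41e2-9e53-f6fa6dc77bd3 \
        --l2p_dram_limit 10 \
        -c nvc0n1p0

Because neither device carries a prior FTL instance, startup takes the "Create new FTL" path: a fresh UUID is generated, the superblock is default-initialized, and memory pools and bands are set up from scratch.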
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:33:22.608 [2024-07-25 12:02:19.436541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.436636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.436656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:22.608 [2024-07-25 12:02:19.436675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:22.608 [2024-07-25 12:02:19.436688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.436762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:22.608 [2024-07-25 12:02:19.441525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.441577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:22.608 [2024-07-25 12:02:19.441596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:33:22.608 [2024-07-25 12:02:19.441610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.441659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.608 [2024-07-25 12:02:19.441679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:22.608 [2024-07-25 12:02:19.441707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:22.608 [2024-07-25 12:02:19.441724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.608 [2024-07-25 12:02:19.441785] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:22.608 [2024-07-25 12:02:19.441955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:22.608 [2024-07-25 12:02:19.441976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:22.608 [2024-07-25 12:02:19.441996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:22.608 [2024-07-25 12:02:19.442012] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:22.608 [2024-07-25 12:02:19.442028] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:22.608 [2024-07-25 12:02:19.442041] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:22.608 [2024-07-25 12:02:19.442059] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:22.608 [2024-07-25 12:02:19.442071] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:22.608 [2024-07-25 12:02:19.442084] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:22.608 [2024-07-25 12:02:19.442096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.609 [2024-07-25 12:02:19.442110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:22.609 [2024-07-25 12:02:19.442123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:33:22.609 [2024-07-25 12:02:19.442137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.609 [2024-07-25 12:02:19.442231] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.609 [2024-07-25 12:02:19.442249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:22.609 [2024-07-25 12:02:19.442261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:22.609 [2024-07-25 12:02:19.442277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.609 [2024-07-25 12:02:19.442387] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:22.609 [2024-07-25 12:02:19.442408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:22.609 [2024-07-25 12:02:19.442433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:22.609 [2024-07-25 12:02:19.442486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:22.609 [2024-07-25 12:02:19.442667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:22.609 [2024-07-25 12:02:19.442706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:22.609 [2024-07-25 12:02:19.442726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:22.609 [2024-07-25 12:02:19.442738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:22.609 [2024-07-25 12:02:19.442751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:22.609 [2024-07-25 12:02:19.442762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:22.609 [2024-07-25 12:02:19.442775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:22.609 [2024-07-25 12:02:19.442801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:22.609 [2024-07-25 12:02:19.442835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:22.609 [2024-07-25 12:02:19.442872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:22.609 [2024-07-25 12:02:19.442906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442929] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:22.609 [2024-07-25 12:02:19.442942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:22.609 [2024-07-25 12:02:19.442965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:22.609 [2024-07-25 12:02:19.442975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:22.609 [2024-07-25 12:02:19.442989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:22.609 [2024-07-25 12:02:19.443000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:22.609 [2024-07-25 12:02:19.443013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:22.609 [2024-07-25 12:02:19.443024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:22.609 [2024-07-25 12:02:19.443038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:22.609 [2024-07-25 12:02:19.443049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:22.609 [2024-07-25 12:02:19.443061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.443072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:22.609 [2024-07-25 12:02:19.443085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:22.609 [2024-07-25 12:02:19.443095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.443107] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:22.609 [2024-07-25 12:02:19.443120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:22.609 [2024-07-25 12:02:19.443134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:22.609 [2024-07-25 12:02:19.443146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:22.609 [2024-07-25 12:02:19.443160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:22.609 [2024-07-25 12:02:19.443171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:22.609 [2024-07-25 12:02:19.443186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:22.609 [2024-07-25 12:02:19.443197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:22.609 [2024-07-25 12:02:19.443210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:22.609 [2024-07-25 12:02:19.443221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:22.609 [2024-07-25 12:02:19.443238] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:22.609 [2024-07-25 12:02:19.443256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:22.609 [2024-07-25 12:02:19.443284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:22.609 [2024-07-25 12:02:19.443298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:22.609 [2024-07-25 12:02:19.443309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:22.609 [2024-07-25 12:02:19.443323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:22.609 [2024-07-25 12:02:19.443335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:22.609 [2024-07-25 12:02:19.443350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:22.609 [2024-07-25 12:02:19.443362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:22.609 [2024-07-25 12:02:19.443375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:22.609 [2024-07-25 12:02:19.443387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:22.609 [2024-07-25 12:02:19.443454] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:22.609 [2024-07-25 12:02:19.443467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:22.609 [2024-07-25 12:02:19.443494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:22.609 [2024-07-25 12:02:19.443507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:22.609 [2024-07-25 12:02:19.443519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:22.609 [2024-07-25 12:02:19.443534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.609 [2024-07-25 12:02:19.443546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:22.609 [2024-07-25 12:02:19.443561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:33:22.609 [2024-07-25 12:02:19.443573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.609 [2024-07-25 12:02:19.443629] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:33:22.609 [2024-07-25 12:02:19.443654] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:25.894 [2024-07-25 12:02:22.223628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.894 [2024-07-25 12:02:22.223753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:25.895 [2024-07-25 12:02:22.223796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2779.997 ms 00:33:25.895 [2024-07-25 12:02:22.223820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.259214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.259279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:25.895 [2024-07-25 12:02:22.259306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.905 ms 00:33:25.895 [2024-07-25 12:02:22.259319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.259519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.259541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:25.895 [2024-07-25 12:02:22.259562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:25.895 [2024-07-25 12:02:22.259575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.298606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.298667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:25.895 [2024-07-25 12:02:22.298712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.967 ms 00:33:25.895 [2024-07-25 12:02:22.298730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.298801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.298818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:25.895 [2024-07-25 12:02:22.298840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:25.895 [2024-07-25 12:02:22.298852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.299281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.299301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:25.895 [2024-07-25 12:02:22.299318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:33:25.895 [2024-07-25 12:02:22.299330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.299483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.299505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:25.895 [2024-07-25 12:02:22.299520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:33:25.895 [2024-07-25 12:02:22.299532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.317105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.317166] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:25.895 [2024-07-25 12:02:22.317190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.537 ms 00:33:25.895 [2024-07-25 12:02:22.317204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.330829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:25.895 [2024-07-25 12:02:22.333588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.333634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:25.895 [2024-07-25 12:02:22.333655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.252 ms 00:33:25.895 [2024-07-25 12:02:22.333671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.466988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.467105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:25.895 [2024-07-25 12:02:22.467131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 133.240 ms 00:33:25.895 [2024-07-25 12:02:22.467147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.467411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.467440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:25.895 [2024-07-25 12:02:22.467457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:33:25.895 [2024-07-25 12:02:22.467487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.509529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.509601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:25.895 [2024-07-25 12:02:22.509626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.937 ms 00:33:25.895 [2024-07-25 12:02:22.509649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.547475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.547554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:25.895 [2024-07-25 12:02:22.547579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.741 ms 00:33:25.895 [2024-07-25 12:02:22.547596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.548511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.548555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:25.895 [2024-07-25 12:02:22.548579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:33:25.895 [2024-07-25 12:02:22.548595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.654118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.654226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:25.895 [2024-07-25 12:02:22.654264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.440 ms 00:33:25.895 [2024-07-25 12:02:22.654287] 
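The L2P figures traced above fit together as follows: with 20971520 entries at 4 B each, the full logical-to-physical table is 80 MiB, and the user-visible capacity it maps is 20971520 blocks of 4 KiB, i.e. 80 GiB (the rest of the 103424 MiB base device is presumably held back by FTL for band overhead and overprovisioning). The --l2p_dram_limit 10 argument caps the resident portion at 10 MiB, which is why ftl_l2p_cache reports "l2p maximum resident size is: 9 (of 10) MiB"; roughly 9 MiB of the budget holds table pages, the remaining ~1 MiB presumably going to cache bookkeeping, so pages are faulted in and written back on demand.

    20971520 entries * 4 B    = 83886080 B = 80 MiB   (full L2P table)
    20971520 blocks  * 4096 B = 80 GiB                (mapped user capacity)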
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.693313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.693378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:25.895 [2024-07-25 12:02:22.693401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.963 ms 00:33:25.895 [2024-07-25 12:02:22.693418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.731742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.731810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:25.895 [2024-07-25 12:02:22.731841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.263 ms 00:33:25.895 [2024-07-25 12:02:22.731858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.770544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.770669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:25.895 [2024-07-25 12:02:22.770715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.621 ms 00:33:25.895 [2024-07-25 12:02:22.770739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.770852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.770881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:25.895 [2024-07-25 12:02:22.770898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:25.895 [2024-07-25 12:02:22.770921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.771095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.895 [2024-07-25 12:02:22.771128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:25.895 [2024-07-25 12:02:22.771144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:25.895 [2024-07-25 12:02:22.771160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.895 [2024-07-25 12:02:22.772451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3343.479 ms, result 0 00:33:25.895 { 00:33:25.895 "name": "ftl0", 00:33:25.895 "uuid": "15ca77a5-df83-4821-8402-2eecbfb8ae8b" 00:33:25.895 } 00:33:25.895 12:02:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:33:25.895 12:02:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:26.154 12:02:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:33:26.154 12:02:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:33:26.154 12:02:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:33:26.413 /dev/nbd0 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:33:26.413 1+0 records in 00:33:26.413 1+0 records out 00:33:26.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980941 s, 4.2 MB/s 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:33:26.413 12:02:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:33:26.672 [2024-07-25 12:02:23.511119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:26.672 [2024-07-25 12:02:23.511282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82358 ] 00:33:26.672 [2024-07-25 12:02:23.679646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.931 [2024-07-25 12:02:23.869446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.449  Copying: 161/1024 [MB] (161 MBps) Copying: 335/1024 [MB] (173 MBps) Copying: 508/1024 [MB] (173 MBps) Copying: 682/1024 [MB] (173 MBps) Copying: 851/1024 [MB] (169 MBps) Copying: 1011/1024 [MB] (160 MBps) Copying: 1024/1024 [MB] (average 168 MBps) 00:33:34.449 00:33:34.449 12:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:37.005 12:02:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:33:37.005 [2024-07-25 12:02:33.664841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
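A note on the workload above: both transfers move 262144 blocks * 4096 B = 1 GiB. The first spdk_dd stages random data into testfile (md5sum records its checksum, presumably for comparison after the dirty restart), and the second replays that file into the FTL bdev through its nbd export with O_DIRECT. A minimal sketch of the sequence, assuming the FTL bdev is already exported as /dev/nbd0:

    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The contrast in the progress lines is the point of interest: ~168 MBps average to the plain file versus ~16 MBps average through the FTL write path below.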
00:33:37.006 [2024-07-25 12:02:33.664990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82460 ] 00:33:37.006 [2024-07-25 12:02:33.824664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.006 [2024-07-25 12:02:34.011523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.255  Copying: 17/1024 [MB] (17 MBps) Copying: 34/1024 [MB] (16 MBps) Copying: 50/1024 [MB] (16 MBps) Copying: 68/1024 [MB] (17 MBps) Copying: 86/1024 [MB] (17 MBps) Copying: 103/1024 [MB] (16 MBps) Copying: 120/1024 [MB] (17 MBps) Copying: 136/1024 [MB] (16 MBps) Copying: 151/1024 [MB] (15 MBps) Copying: 168/1024 [MB] (16 MBps) Copying: 185/1024 [MB] (17 MBps) Copying: 201/1024 [MB] (16 MBps) Copying: 216/1024 [MB] (14 MBps) Copying: 232/1024 [MB] (15 MBps) Copying: 248/1024 [MB] (16 MBps) Copying: 265/1024 [MB] (16 MBps) Copying: 281/1024 [MB] (15 MBps) Copying: 297/1024 [MB] (16 MBps) Copying: 314/1024 [MB] (16 MBps) Copying: 331/1024 [MB] (17 MBps) Copying: 347/1024 [MB] (16 MBps) Copying: 364/1024 [MB] (16 MBps) Copying: 381/1024 [MB] (17 MBps) Copying: 396/1024 [MB] (15 MBps) Copying: 410/1024 [MB] (13 MBps) Copying: 425/1024 [MB] (15 MBps) Copying: 440/1024 [MB] (15 MBps) Copying: 457/1024 [MB] (16 MBps) Copying: 472/1024 [MB] (15 MBps) Copying: 489/1024 [MB] (16 MBps) Copying: 504/1024 [MB] (15 MBps) Copying: 519/1024 [MB] (14 MBps) Copying: 534/1024 [MB] (14 MBps) Copying: 550/1024 [MB] (16 MBps) Copying: 566/1024 [MB] (16 MBps) Copying: 583/1024 [MB] (16 MBps) Copying: 599/1024 [MB] (16 MBps) Copying: 615/1024 [MB] (16 MBps) Copying: 632/1024 [MB] (16 MBps) Copying: 648/1024 [MB] (16 MBps) Copying: 664/1024 [MB] (16 MBps) Copying: 681/1024 [MB] (17 MBps) Copying: 699/1024 [MB] (17 MBps) Copying: 716/1024 [MB] (17 MBps) Copying: 733/1024 [MB] (16 MBps) Copying: 749/1024 [MB] (16 MBps) Copying: 765/1024 [MB] (15 MBps) Copying: 781/1024 [MB] (15 MBps) Copying: 797/1024 [MB] (15 MBps) Copying: 813/1024 [MB] (16 MBps) Copying: 831/1024 [MB] (17 MBps) Copying: 848/1024 [MB] (16 MBps) Copying: 864/1024 [MB] (16 MBps) Copying: 880/1024 [MB] (15 MBps) Copying: 897/1024 [MB] (17 MBps) Copying: 912/1024 [MB] (15 MBps) Copying: 930/1024 [MB] (17 MBps) Copying: 947/1024 [MB] (16 MBps) Copying: 963/1024 [MB] (16 MBps) Copying: 980/1024 [MB] (16 MBps) Copying: 997/1024 [MB] (17 MBps) Copying: 1014/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:34:41.255 00:34:41.255 12:03:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:34:41.255 12:03:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:34:41.255 12:03:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:34:41.513 [2024-07-25 12:03:38.465562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.465628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:41.513 [2024-07-25 12:03:38.465669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:41.513 [2024-07-25 12:03:38.465683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.465760] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:41.513 [2024-07-25 12:03:38.469138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.469185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:41.513 [2024-07-25 12:03:38.469203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.350 ms 00:34:41.513 [2024-07-25 12:03:38.469217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.470814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.470873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:41.513 [2024-07-25 12:03:38.470893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:34:41.513 [2024-07-25 12:03:38.470921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.486548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.486605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:41.513 [2024-07-25 12:03:38.486626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.598 ms 00:34:41.513 [2024-07-25 12:03:38.486642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.493393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.493462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:41.513 [2024-07-25 12:03:38.493481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.677 ms 00:34:41.513 [2024-07-25 12:03:38.493496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.525245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.525334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:41.513 [2024-07-25 12:03:38.525355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.640 ms 00:34:41.513 [2024-07-25 12:03:38.525370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.544244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.544321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:41.513 [2024-07-25 12:03:38.544342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.813 ms 00:34:41.513 [2024-07-25 12:03:38.544358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.513 [2024-07-25 12:03:38.544567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.513 [2024-07-25 12:03:38.544596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:41.513 [2024-07-25 12:03:38.544611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:34:41.513 [2024-07-25 12:03:38.544625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.773 [2024-07-25 12:03:38.576373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.773 [2024-07-25 12:03:38.576436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:34:41.773 [2024-07-25 12:03:38.576457] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 31.721 ms 00:34:41.773 [2024-07-25 12:03:38.576471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.773 [2024-07-25 12:03:38.607569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.773 [2024-07-25 12:03:38.607635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:34:41.773 [2024-07-25 12:03:38.607655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.041 ms 00:34:41.773 [2024-07-25 12:03:38.607669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.773 [2024-07-25 12:03:38.638496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.773 [2024-07-25 12:03:38.638561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:41.773 [2024-07-25 12:03:38.638581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.745 ms 00:34:41.773 [2024-07-25 12:03:38.638596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.773 [2024-07-25 12:03:38.669473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.773 [2024-07-25 12:03:38.669534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:41.773 [2024-07-25 12:03:38.669554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.711 ms 00:34:41.773 [2024-07-25 12:03:38.669569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.773 [2024-07-25 12:03:38.669623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:41.773 [2024-07-25 12:03:38.669653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669869] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.669997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:41.773 [2024-07-25 12:03:38.670159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670253] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 
12:03:38.670742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.670999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:34:41.774 [2024-07-25 12:03:38.671169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:41.774 [2024-07-25 12:03:38.671400] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:41.774 [2024-07-25 12:03:38.671416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ca77a5-df83-4821-8402-2eecbfb8ae8b 00:34:41.774 [2024-07-25 12:03:38.671441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:41.774 [2024-07-25 12:03:38.671467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:41.774 [2024-07-25 12:03:38.671488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:41.774 [2024-07-25 12:03:38.671501] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:41.774 [2024-07-25 12:03:38.671516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:41.774 [2024-07-25 12:03:38.671536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:41.774 [2024-07-25 12:03:38.671555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:41.774 [2024-07-25 12:03:38.671566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:41.774 [2024-07-25 12:03:38.671578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:41.774 [2024-07-25 12:03:38.671590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.774 [2024-07-25 12:03:38.671605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:41.774 [2024-07-25 12:03:38.671618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:34:41.774 [2024-07-25 12:03:38.671638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.774 [2024-07-25 12:03:38.688400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.774 [2024-07-25 
12:03:38.688451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:41.774 [2024-07-25 12:03:38.688471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.646 ms 00:34:41.774 [2024-07-25 12:03:38.688486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.774 [2024-07-25 12:03:38.688968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.774 [2024-07-25 12:03:38.689019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:41.774 [2024-07-25 12:03:38.689037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:34:41.774 [2024-07-25 12:03:38.689052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.774 [2024-07-25 12:03:38.741129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.774 [2024-07-25 12:03:38.741200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:41.774 [2024-07-25 12:03:38.741220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.774 [2024-07-25 12:03:38.741235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.774 [2024-07-25 12:03:38.741330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.774 [2024-07-25 12:03:38.741351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:41.774 [2024-07-25 12:03:38.741365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.774 [2024-07-25 12:03:38.741378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.774 [2024-07-25 12:03:38.741517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.774 [2024-07-25 12:03:38.741544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:41.774 [2024-07-25 12:03:38.741558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.774 [2024-07-25 12:03:38.741573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.774 [2024-07-25 12:03:38.741599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.774 [2024-07-25 12:03:38.741619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:41.774 [2024-07-25 12:03:38.741632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.774 [2024-07-25 12:03:38.741646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.032 [2024-07-25 12:03:38.840952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.032 [2024-07-25 12:03:38.841042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:42.032 [2024-07-25 12:03:38.841064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.032 [2024-07-25 12:03:38.841079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.032 [2024-07-25 12:03:38.925343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.032 [2024-07-25 12:03:38.925410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:42.032 [2024-07-25 12:03:38.925431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.032 [2024-07-25 12:03:38.925447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.032 [2024-07-25 12:03:38.925587] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.033 [2024-07-25 12:03:38.925614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:42.033 [2024-07-25 12:03:38.925628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.033 [2024-07-25 12:03:38.925641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.033 [2024-07-25 12:03:38.925733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.033 [2024-07-25 12:03:38.925765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:42.033 [2024-07-25 12:03:38.925778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.033 [2024-07-25 12:03:38.925793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.033 [2024-07-25 12:03:38.925938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.033 [2024-07-25 12:03:38.925973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:42.033 [2024-07-25 12:03:38.925996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.033 [2024-07-25 12:03:38.926021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.033 [2024-07-25 12:03:38.926081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.033 [2024-07-25 12:03:38.926111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:42.033 [2024-07-25 12:03:38.926129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.033 [2024-07-25 12:03:38.926154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.033 [2024-07-25 12:03:38.926218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.033 [2024-07-25 12:03:38.926241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:42.033 [2024-07-25 12:03:38.926258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.033 [2024-07-25 12:03:38.926271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.033 [2024-07-25 12:03:38.926343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.033 [2024-07-25 12:03:38.926371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:42.033 [2024-07-25 12:03:38.926386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.033 [2024-07-25 12:03:38.926427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.033 [2024-07-25 12:03:38.926593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.003 ms, result 0 00:34:42.033 true 00:34:42.033 12:03:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82216 00:34:42.033 12:03:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82216 00:34:42.033 12:03:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:34:42.300 [2024-07-25 12:03:39.067439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
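
At this point the test has completed the clean 'FTL shutdown' sequence (result 0), SIGKILLed the target (kill -9 82216), removed its shm trace file, and is regenerating 1 GiB of reference data with spdk_dd before bringing ftl0 back up from its on-disk state (the blobstore recovery and the 'Set FTL dirty state' step appear in the records below). A few back-of-the-envelope checks of the numbers in this run, as a minimal plain-Python sketch — the WAF formula and the 4 KiB FTL block size are inferences from the logged values themselves, not taken from the SPDK sources:

    # Quick numeric cross-checks of figures logged in this run.
    # Formulas are inferred from the logged values, not from SPDK code.

    # WAF looks like total writes / user writes: the dump above shows
    # 960 total and 0 user writes ("WAF: inf"); the dump at the end of
    # the run shows 131776 total and 130816 user writes ("WAF: 1.0073").
    print(131776 / 130816)                 # 1.00733855... -> "1.0073"

    # spdk_dd job size: --bs=4096 --count=262144 gives the
    # "1024/1024 [MB]" the progress lines below count up to.
    print(4096 * 262144 // 2**20, "MiB")   # 1024 MiB

    # L2P region in the layout dump further below: 20971520 entries of
    # 4 bytes each, and blk_sz 0x5000 at 4 KiB per FTL block -- both 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB".
    print(20971520 * 4 // 2**20, "MiB")    # 80 MiB
    print(0x5000 * 4096 // 2**20, "MiB")   # 80 MiB

With every band still free and 0 valid LBAs at the dump above, the first "WAF: inf" is expected: all 960 writes so far were internal metadata writes, with no user I/O yet.
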
00:34:42.300 [2024-07-25 12:03:39.067611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83110 ] 00:34:42.300 [2024-07-25 12:03:39.243450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.574 [2024-07-25 12:03:39.443528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.717  Copying: 162/1024 [MB] (162 MBps) Copying: 310/1024 [MB] (147 MBps) Copying: 452/1024 [MB] (142 MBps) Copying: 619/1024 [MB] (167 MBps) Copying: 781/1024 [MB] (161 MBps) Copying: 934/1024 [MB] (153 MBps) Copying: 1024/1024 [MB] (average 156 MBps) 00:34:50.717 00:34:50.717 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82216 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:34:50.717 12:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:50.974 [2024-07-25 12:03:47.777714] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:50.974 [2024-07-25 12:03:47.777869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83196 ] 00:34:50.974 [2024-07-25 12:03:47.940206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.232 [2024-07-25 12:03:48.139344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.490 [2024-07-25 12:03:48.495571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:51.490 [2024-07-25 12:03:48.495662] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:51.748 [2024-07-25 12:03:48.562267] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:51.748 [2024-07-25 12:03:48.562568] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:51.748 [2024-07-25 12:03:48.562726] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:52.007 [2024-07-25 12:03:48.809729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.809801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:52.007 [2024-07-25 12:03:48.809822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:52.007 [2024-07-25 12:03:48.809836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.809940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.809966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:52.007 [2024-07-25 12:03:48.809980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:52.007 [2024-07-25 12:03:48.809992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.810024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:52.007 [2024-07-25 12:03:48.811157] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:52.007 [2024-07-25 12:03:48.811231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.811257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:52.007 [2024-07-25 12:03:48.811281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:34:52.007 [2024-07-25 12:03:48.811301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.812710] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:52.007 [2024-07-25 12:03:48.836208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.836324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:52.007 [2024-07-25 12:03:48.836376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.510 ms 00:34:52.007 [2024-07-25 12:03:48.836414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.836574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.836620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:52.007 [2024-07-25 12:03:48.836646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:34:52.007 [2024-07-25 12:03:48.836666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.841925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.842000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:52.007 [2024-07-25 12:03:48.842023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:34:52.007 [2024-07-25 12:03:48.842035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.842162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.842185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:52.007 [2024-07-25 12:03:48.842199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:34:52.007 [2024-07-25 12:03:48.842210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.842301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.842321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:52.007 [2024-07-25 12:03:48.842339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:52.007 [2024-07-25 12:03:48.842351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.842413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:52.007 [2024-07-25 12:03:48.847380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.847450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:52.007 [2024-07-25 12:03:48.847478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:34:52.007 [2024-07-25 12:03:48.847498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 
12:03:48.847573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.007 [2024-07-25 12:03:48.847603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:52.007 [2024-07-25 12:03:48.847619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:52.007 [2024-07-25 12:03:48.847631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.007 [2024-07-25 12:03:48.847753] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:52.007 [2024-07-25 12:03:48.847798] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:52.007 [2024-07-25 12:03:48.847851] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:52.007 [2024-07-25 12:03:48.847885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:34:52.007 [2024-07-25 12:03:48.847994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:52.007 [2024-07-25 12:03:48.848010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:52.008 [2024-07-25 12:03:48.848024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:34:52.008 [2024-07-25 12:03:48.848039] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848052] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848071] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:52.008 [2024-07-25 12:03:48.848082] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:52.008 [2024-07-25 12:03:48.848093] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:52.008 [2024-07-25 12:03:48.848103] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:52.008 [2024-07-25 12:03:48.848116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.008 [2024-07-25 12:03:48.848127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:52.008 [2024-07-25 12:03:48.848139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:34:52.008 [2024-07-25 12:03:48.848150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.008 [2024-07-25 12:03:48.848248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.008 [2024-07-25 12:03:48.848264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:52.008 [2024-07-25 12:03:48.848282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:34:52.008 [2024-07-25 12:03:48.848293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.008 [2024-07-25 12:03:48.848400] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:52.008 [2024-07-25 12:03:48.848425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:52.008 [2024-07-25 12:03:48.848438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848450] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:52.008 [2024-07-25 12:03:48.848473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:52.008 [2024-07-25 12:03:48.848504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:52.008 [2024-07-25 12:03:48.848524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:52.008 [2024-07-25 12:03:48.848535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:52.008 [2024-07-25 12:03:48.848545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:52.008 [2024-07-25 12:03:48.848555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:52.008 [2024-07-25 12:03:48.848566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:52.008 [2024-07-25 12:03:48.848576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:52.008 [2024-07-25 12:03:48.848613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:52.008 [2024-07-25 12:03:48.848644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848655] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:52.008 [2024-07-25 12:03:48.848676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:52.008 [2024-07-25 12:03:48.848727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:52.008 [2024-07-25 12:03:48.848758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:52.008 [2024-07-25 12:03:48.848789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:52.008 [2024-07-25 12:03:48.848810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:52.008 [2024-07-25 12:03:48.848820] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:52.008 [2024-07-25 12:03:48.848830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:52.008 [2024-07-25 12:03:48.848847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:52.008 [2024-07-25 12:03:48.848867] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:52.008 [2024-07-25 12:03:48.848883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:52.008 [2024-07-25 12:03:48.848904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:52.008 [2024-07-25 12:03:48.848914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848924] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:52.008 [2024-07-25 12:03:48.848935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:52.008 [2024-07-25 12:03:48.848946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:52.008 [2024-07-25 12:03:48.848957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:52.008 [2024-07-25 12:03:48.848977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:52.008 [2024-07-25 12:03:48.848988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:52.008 [2024-07-25 12:03:48.848998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:52.008 [2024-07-25 12:03:48.849009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:52.008 [2024-07-25 12:03:48.849019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:52.008 [2024-07-25 12:03:48.849029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:52.008 [2024-07-25 12:03:48.849041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:52.008 [2024-07-25 12:03:48.849055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:52.008 [2024-07-25 12:03:48.849069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:52.008 [2024-07-25 12:03:48.849081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:52.008 [2024-07-25 12:03:48.849092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:52.008 [2024-07-25 12:03:48.849104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:52.008 [2024-07-25 12:03:48.849115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:52.008 [2024-07-25 12:03:48.849126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:52.008 [2024-07-25 12:03:48.849138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:52.008 [2024-07-25 
12:03:48.849149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:52.008 [2024-07-25 12:03:48.849160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:52.008 [2024-07-25 12:03:48.849172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:52.008 [2024-07-25 12:03:48.849183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:52.008 [2024-07-25 12:03:48.849195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:52.008 [2024-07-25 12:03:48.849207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:52.008 [2024-07-25 12:03:48.849219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:52.008 [2024-07-25 12:03:48.849230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:52.008 [2024-07-25 12:03:48.849242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:52.009 [2024-07-25 12:03:48.849255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:52.009 [2024-07-25 12:03:48.849266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:52.009 [2024-07-25 12:03:48.849278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:52.009 [2024-07-25 12:03:48.849289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:52.009 [2024-07-25 12:03:48.849302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.849313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:52.009 [2024-07-25 12:03:48.849325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:34:52.009 [2024-07-25 12:03:48.849336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.892886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.893169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:52.009 [2024-07-25 12:03:48.893296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.482 ms 00:34:52.009 [2024-07-25 12:03:48.893349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.893586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.893748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:52.009 [2024-07-25 12:03:48.893906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:34:52.009 [2024-07-25 12:03:48.893980] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.935209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.935477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:52.009 [2024-07-25 12:03:48.935632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.022 ms 00:34:52.009 [2024-07-25 12:03:48.935686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.935832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.935921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:52.009 [2024-07-25 12:03:48.936041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:52.009 [2024-07-25 12:03:48.936158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.936636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.936800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:52.009 [2024-07-25 12:03:48.936963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:34:52.009 [2024-07-25 12:03:48.937094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.937378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.937556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:52.009 [2024-07-25 12:03:48.937712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:34:52.009 [2024-07-25 12:03:48.937770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.955168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.955420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:52.009 [2024-07-25 12:03:48.955452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.242 ms 00:34:52.009 [2024-07-25 12:03:48.955465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:48.972785] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:52.009 [2024-07-25 12:03:48.972871] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:52.009 [2024-07-25 12:03:48.972902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:48.972916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:52.009 [2024-07-25 12:03:48.972932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.247 ms 00:34:52.009 [2024-07-25 12:03:48.972943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:49.003924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:49.004024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:52.009 [2024-07-25 12:03:49.004046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.885 ms 00:34:52.009 [2024-07-25 12:03:49.004058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 
12:03:49.020400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:49.020468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:52.009 [2024-07-25 12:03:49.020489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.241 ms 00:34:52.009 [2024-07-25 12:03:49.020500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:49.036355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:49.036426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:52.009 [2024-07-25 12:03:49.036447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.785 ms 00:34:52.009 [2024-07-25 12:03:49.036459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.009 [2024-07-25 12:03:49.037369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.009 [2024-07-25 12:03:49.037407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:52.009 [2024-07-25 12:03:49.037424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:34:52.009 [2024-07-25 12:03:49.037435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.267 [2024-07-25 12:03:49.116079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.267 [2024-07-25 12:03:49.116163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:52.268 [2024-07-25 12:03:49.116185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.613 ms 00:34:52.268 [2024-07-25 12:03:49.116198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.129359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:52.268 [2024-07-25 12:03:49.132220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.132271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:52.268 [2024-07-25 12:03:49.132292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.935 ms 00:34:52.268 [2024-07-25 12:03:49.132304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.132443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.132468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:52.268 [2024-07-25 12:03:49.132482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:52.268 [2024-07-25 12:03:49.132494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.132594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.132614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:52.268 [2024-07-25 12:03:49.132627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:52.268 [2024-07-25 12:03:49.132639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.132671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.132687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:52.268 [2024-07-25 12:03:49.132721] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:52.268 [2024-07-25 12:03:49.132733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.132775] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:52.268 [2024-07-25 12:03:49.132793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.132805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:52.268 [2024-07-25 12:03:49.132817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:52.268 [2024-07-25 12:03:49.132828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.164894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.164993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:52.268 [2024-07-25 12:03:49.165015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.033 ms 00:34:52.268 [2024-07-25 12:03:49.165029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.165148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.268 [2024-07-25 12:03:49.165169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:52.268 [2024-07-25 12:03:49.165182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:52.268 [2024-07-25 12:03:49.165194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.268 [2024-07-25 12:03:49.166508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.209 ms, result 0 00:35:31.779  Copying: 28/1024 [MB] (28 MBps) Copying: 61/1024 [MB] (32 MBps) Copying: 89/1024 [MB] (28 MBps) Copying: 118/1024 [MB] (28 MBps) Copying: 148/1024 [MB] (30 MBps) Copying: 181/1024 [MB] (32 MBps) Copying: 212/1024 [MB] (31 MBps) Copying: 239/1024 [MB] (27 MBps) Copying: 269/1024 [MB] (29 MBps) Copying: 298/1024 [MB] (28 MBps) Copying: 324/1024 [MB] (26 MBps) Copying: 348/1024 [MB] (23 MBps) Copying: 378/1024 [MB] (29 MBps) Copying: 405/1024 [MB] (27 MBps) Copying: 432/1024 [MB] (26 MBps) Copying: 457/1024 [MB] (25 MBps) Copying: 484/1024 [MB] (26 MBps) Copying: 510/1024 [MB] (26 MBps) Copying: 538/1024 [MB] (27 MBps) Copying: 562/1024 [MB] (23 MBps) Copying: 584/1024 [MB] (22 MBps) Copying: 610/1024 [MB] (26 MBps) Copying: 633/1024 [MB] (22 MBps) Copying: 659/1024 [MB] (26 MBps) Copying: 687/1024 [MB] (28 MBps) Copying: 714/1024 [MB] (27 MBps) Copying: 740/1024 [MB] (25 MBps) Copying: 765/1024 [MB] (25 MBps) Copying: 786/1024 [MB] (21 MBps) Copying: 810/1024 [MB] (24 MBps) Copying: 833/1024 [MB] (22 MBps) Copying: 858/1024 [MB] (25 MBps) Copying: 887/1024 [MB] (28 MBps) Copying: 915/1024 [MB] (28 MBps) Copying: 942/1024 [MB] (27 MBps) Copying: 966/1024 [MB] (23 MBps) Copying: 995/1024 [MB] (28 MBps) Copying: 1023/1024 [MB] (27 MBps) Copying: 1048288/1048576 [kB] (712 kBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-25 12:04:28.599797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.599896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:31.779 [2024-07-25 12:04:28.599932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:31.779 [2024-07-25 
12:04:28.599947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.603229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:31.779 [2024-07-25 12:04:28.607759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.607820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:31.779 [2024-07-25 12:04:28.607841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:35:31.779 [2024-07-25 12:04:28.607853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.623250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.623394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:31.779 [2024-07-25 12:04:28.623426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.064 ms 00:35:31.779 [2024-07-25 12:04:28.623444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.648561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.648717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:31.779 [2024-07-25 12:04:28.648760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.074 ms 00:35:31.779 [2024-07-25 12:04:28.648786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.657333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.657431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:31.779 [2024-07-25 12:04:28.657484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.463 ms 00:35:31.779 [2024-07-25 12:04:28.657509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.696456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.696527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:31.779 [2024-07-25 12:04:28.696548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.792 ms 00:35:31.779 [2024-07-25 12:04:28.696560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.715201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.715287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:31.779 [2024-07-25 12:04:28.715308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.550 ms 00:35:31.779 [2024-07-25 12:04:28.715321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.779 [2024-07-25 12:04:28.801642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.779 [2024-07-25 12:04:28.801758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:31.779 [2024-07-25 12:04:28.801782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.223 ms 00:35:31.779 [2024-07-25 12:04:28.801808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.038 [2024-07-25 12:04:28.846846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.038 [2024-07-25 12:04:28.846974] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:35:32.038 [2024-07-25 12:04:28.847010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.989 ms 00:35:32.038 [2024-07-25 12:04:28.847032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.038 [2024-07-25 12:04:28.894718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.038 [2024-07-25 12:04:28.894840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:35:32.038 [2024-07-25 12:04:28.894879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.538 ms 00:35:32.038 [2024-07-25 12:04:28.894903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.038 [2024-07-25 12:04:28.943221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.038 [2024-07-25 12:04:28.943355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:32.038 [2024-07-25 12:04:28.943392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.179 ms 00:35:32.038 [2024-07-25 12:04:28.943412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.038 [2024-07-25 12:04:28.981367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:32.038 [2024-07-25 12:04:28.981476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:32.038 [2024-07-25 12:04:28.981510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.728 ms 00:35:32.038 [2024-07-25 12:04:28.981530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:32.038 [2024-07-25 12:04:28.981648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:32.038 [2024-07-25 12:04:28.981724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:35:32.038 [2024-07-25 12:04:28.981747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:32.038 [2024-07-25 12:04:28.981847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981893] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.981998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 12:04:28.982217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:32.039 [2024-07-25 
12:04:28.982233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
[2024-07-25 12:04:28.982245 .. 12:04:28.983169] [... Bands 39-100: identical, 0 / 261120 wr_cnt: 0 state: free ...]
[2024-07-25 12:04:28.983192] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-07-25 12:04:28.983204] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ca77a5-df83-4821-8402-2eecbfb8ae8b
[2024-07-25 12:04:28.983221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816
[2024-07-25 12:04:28.983232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131776
[2024-07-25 12:04:28.983246] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816
[2024-07-25 12:04:28.983259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073
[2024-07-25 12:04:28.983270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-07-25 12:04:28.983325] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.700 ms, status: 0)
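The statistics dump above is internally consistent: the FTL's write amplification factor is total writes divided by user writes. A quick check in Python, with the values copied from the dump:

    # WAF = total writes / user writes, per the ftl_dev_dump_stats output above
    total_writes = 131776   # "total writes: 131776"
    user_writes  = 130816   # "user writes: 130816"
    print(f"WAF = {total_writes / user_writes:.4f}")   # WAF = 1.0073

The same identity holds for the second statistics dump later in this log (135872 / 133888 rounds to 1.0148).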
[2024-07-25 12:04:29.002038] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 18.580 ms, status: 0)
[2024-07-25 12:04:29.002637] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.435 ms, status: 0)
[2024-07-25 12:04:29.043686] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.043905] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.044065] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.044139] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.153907] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.252375] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.252611] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.252750] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.252929] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.253050] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.253140] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.253238] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
[2024-07-25 12:04:29.253451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 657.166 ms, result 0
00:35:34.199 12:04:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:35:36.729 12:04:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-07-25 12:04:33.357005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-25 12:04:33.357214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83636 ]
[2024-07-25 12:04:33.545387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 12:04:33.884381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 12:04:34.235535] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-07-25 12:04:34.235640] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-07-25 12:04:34.397463] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.006 ms, status: 0)
[2024-07-25 12:04:34.397663] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.053 ms, status: 0)
[2024-07-25 12:04:34.397794] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-07-25 12:04:34.398883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-07-25 12:04:34.398932] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 1.160 ms, status: 0)
[2024-07-25 12:04:34.400167] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-07-25 12:04:34.417483] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 17.310 ms, status: 0)
[2024-07-25 12:04:34.417801] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.073 ms, status: 0)
[2024-07-25 12:04:34.423175] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 5.160 ms, status: 0)
[2024-07-25 12:04:34.423424] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.071 ms, status: 0)
[2024-07-25 12:04:34.423585] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.021 ms, status: 0)
[2024-07-25 12:04:34.423666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-07-25 12:04:34.428162] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 4.504 ms, status: 0)
[2024-07-25 12:04:34.428331] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.015 ms, status: 0)
[2024-07-25 12:04:34.428489] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-07-25 12:04:34.428530] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x168 bytes
[2024-07-25 12:04:34.428744] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x168 bytes
[2024-07-25 12:04:34.428833] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-07-25 12:04:34.428856] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-07-25 12:04:34.428878] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-07-25 12:04:34.428897] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-07-25 12:04:34.428913] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-07-25 12:04:34.428929] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-07-25 12:04:34.428948] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.463 ms, status: 0)
[2024-07-25 12:04:34.429147] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.079 ms, status: 0)
[2024-07-25 12:04:34.429326] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb:              offset   0.00 MiB, blocks  0.12 MiB
  Region l2p:             offset   0.12 MiB, blocks 80.00 MiB
  Region band_md:         offset  80.12 MiB, blocks  0.50 MiB
  Region band_md_mirror:  offset  80.62 MiB, blocks  0.50 MiB
  Region nvc_md:          offset 113.88 MiB, blocks  0.12 MiB
  Region nvc_md_mirror:   offset 114.00 MiB, blocks  0.12 MiB
  Region p2l0:            offset  81.12 MiB, blocks  8.00 MiB
  Region p2l1:            offset  89.12 MiB, blocks  8.00 MiB
  Region p2l2:            offset  97.12 MiB, blocks  8.00 MiB
  Region p2l3:            offset 105.12 MiB, blocks  8.00 MiB
  Region trim_md:         offset 113.12 MiB, blocks  0.25 MiB
  Region trim_md_mirror:  offset 113.38 MiB, blocks  0.25 MiB
  Region trim_log:        offset 113.62 MiB, blocks  0.12 MiB
  Region trim_log_mirror: offset 113.75 MiB, blocks  0.12 MiB
[2024-07-25 12:04:34.429848] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror: offset      0.00 MiB, blocks      0.12 MiB
  Region vmap:      offset 102400.25 MiB, blocks      3.38 MiB
  Region data_btm:  offset      0.25 MiB, blocks 102400.00 MiB
[2024-07-25 12:04:34.429991] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0        ver:5 blk_offs:0x0       blk_sz:0x20
  Region type:0x2        ver:0 blk_offs:0x20      blk_sz:0x5000
  Region type:0x3        ver:2 blk_offs:0x5020    blk_sz:0x80
  Region type:0x4        ver:2 blk_offs:0x50a0    blk_sz:0x80
  Region type:0xa        ver:2 blk_offs:0x5120    blk_sz:0x800
  Region type:0xb        ver:2 blk_offs:0x5920    blk_sz:0x800
  Region type:0xc        ver:2 blk_offs:0x6120    blk_sz:0x800
  Region type:0xd        ver:2 blk_offs:0x6920    blk_sz:0x800
  Region type:0xe        ver:0 blk_offs:0x7120    blk_sz:0x40
  Region type:0xf        ver:0 blk_offs:0x7160    blk_sz:0x40
  Region type:0x10       ver:1 blk_offs:0x71a0    blk_sz:0x20
  Region type:0x11       ver:1 blk_offs:0x71c0    blk_sz:0x20
  Region type:0x6        ver:2 blk_offs:0x71e0    blk_sz:0x20
  Region type:0x7        ver:2 blk_offs:0x7200    blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220    blk_sz:0x13c0e0
[2024-07-25 12:04:34.430183] upgrade/ftl_sb_v5.c: ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
  Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
  Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-07-25 12:04:34.430267] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 1.007 ms, status: 0)
[2024-07-25 12:04:34.478361] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 47.970 ms, status: 0)
[2024-07-25 12:04:34.478598] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.067 ms, status: 0)
[2024-07-25 12:04:34.527041] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 48.266 ms, status: 0)
[2024-07-25 12:04:34.527250] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.005 ms, status: 0)
[2024-07-25 12:04:34.527748] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.337 ms, status: 0)
[2024-07-25 12:04:34.527963] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.131 ms, status: 0)
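The layout numbers above cross-check. A minimal sketch in Python (the 4 KiB P2L page size is an assumption, not stated in this log):

    MIB = 1024 * 1024
    # "L2P entries: 20971520" x "L2P address size: 4" should equal the l2p region
    print(20971520 * 4 / MIB)   # 80.0 -> matches "Region l2p ... blocks: 80.00 MiB"
    # "P2L checkpoint pages: 2048", assuming 4 KiB pages, sizes each p2l region
    print(2048 * 4096 / MIB)    # 8.0 -> matches the 8.00 MiB p2l0..p2l3 regions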
[2024-07-25 12:04:34.545198] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 17.141 ms, status: 0)
[2024-07-25 12:04:34.563803] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-07-25 12:04:34.563885] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-07-25 12:04:34.563909] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 18.262 ms, status: 0)
[2024-07-25 12:04:34.596198] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 32.142 ms, status: 0)
[2024-07-25 12:04:34.613251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 16.762 ms, status: 0)
[2024-07-25 12:04:34.630085] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 16.617 ms, status: 0)
[2024-07-25 12:04:34.631260] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.787 ms, status: 0)
[2024-07-25 12:04:34.709487] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 78.115 ms, status: 0)
[2024-07-25 12:04:34.722858] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-07-25 12:04:34.725651] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 15.912 ms, status: 0)
[2024-07-25 12:04:34.725893] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.010 ms, status: 0)
[2024-07-25 12:04:34.727669] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 1.660 ms, status: 0)
[2024-07-25 12:04:34.727866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.010 ms, status: 0)
[2024-07-25 12:04:34.728013] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-07-25 12:04:34.728044] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.034 ms, status: 0)
[2024-07-25 12:04:34.760710] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 32.520 ms, status: 0)
[2024-07-25 12:04:34.761021] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.042 ms, status: 0)
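Every management step logs its duration, so the slow parts of this dirty-shutdown recovery are easy to rank: Restore P2L checkpoints (78.115 ms), Initialize NV cache (48.266 ms) and Initialize metadata (47.970 ms) dominate. A minimal sketch that extracts the ranking from a log in the condensed one-line-per-step form used here (slowest_steps is a hypothetical helper, not part of the SPDK tooling):

    import re

    STEP_RE = re.compile(r"(?:Action|Rollback): (.+?) \(duration: ([0-9.]+) ms")

    def slowest_steps(lines, top=3):
        # Collect (duration_ms, step_name) pairs and return the slowest ones.
        found = (STEP_RE.search(line) for line in lines)
        steps = [(float(m.group(2)), m.group(1)) for m in found if m]
        return sorted(steps, reverse=True)[:top]

    # e.g. slowest_steps(open("console.log"))
    #  -> [(78.115, 'Restore P2L checkpoints'), (48.266, 'Initialize NV cache'), ...]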
[2024-07-25 12:04:34.769755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.267 ms, result 0
00:36:15.761  Copying: 780/1048576 [kB] (780 kBps) Copying: 3528/1048576 [kB] (2748 kBps) Copying: 20/1024 [MB] (17 MBps) Copying: 50/1024 [MB] (29 MBps) [... 32 more progress updates, 80/1024 MB through 975/1024 MB, 24-32 MBps ...] Copying: 1007/1024 [MB] (31 MBps) Copying: 1024/1024 [MB] (average 27 MBps)
[2024-07-25 12:05:12.676430] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.004 ms, status: 0)
[2024-07-25 12:05:12.677529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-07-25 12:05:12.682683] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 4.921 ms, status: 0)
[2024-07-25 12:05:12.683256] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.268 ms, status: 0)
[2024-07-25 12:05:12.696635] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 13.266 ms, status: 0)
[2024-07-25 12:05:12.705365] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 8.537 ms, status: 0)
[2024-07-25 12:05:12.736997] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 31.453 ms, status: 0)
[2024-07-25 12:05:12.755668] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 18.057 ms, status: 0)
[2024-07-25 12:05:12.758964] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 2.727 ms, status: 0)
[2024-07-25 12:05:12.791042] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata (duration: 31.653 ms, status: 0)
[2024-07-25 12:05:12.823917] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata (duration: 32.475 ms, status: 0)
[2024-07-25 12:05:12.854647] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 30.557 ms, status: 0)
[2024-07-25 12:05:12.885346] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 30.491 ms, status: 0)
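The transfer itself reconciles with the spdk_dd invocation earlier: --count=262144 blocks adding up to the 1024 MB reported by the progress meter pins the FTL block size at 4 KiB, and the average rate matches the wall-clock gap between startup and shutdown:

    blocks, total_mib = 262144, 1024
    print(total_mib * 1024 * 1024 // blocks)  # 4096 -> 4 KiB per block
    print(total_mib / 27)                     # ~37.9 s at "average 27 MBps";
                                              # 12:04:34.77 -> 12:05:12.68 is ~37.9 s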
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-07-25 12:05:12.885489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[2024-07-25 12:05:12.885504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open
[2024-07-25 12:05:12.885517 .. 12:05:12.886759] [... Bands 3-100: identical, 0 / 261120 wr_cnt: 0 state: free ...]
[2024-07-25 12:05:12.886780] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-07-25 12:05:12.886791] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ca77a5-df83-4821-8402-2eecbfb8ae8b
[2024-07-25 12:05:12.886803] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704
[2024-07-25 12:05:12.886820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135872
[2024-07-25 12:05:12.886831] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133888
[2024-07-25 12:05:12.886843] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148
[2024-07-25 12:05:12.886858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-07-25 12:05:12.886912] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.449 ms, status: 0)
[2024-07-25 12:05:12.903391] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 16.373 ms, status: 0)
[2024-07-25 12:05:12.903944] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.427 ms, status: 0)
[2024-07-25 12:05:12.940686] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
[2024-07-25 12:05:12.940874] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.023 [2024-07-25 12:05:12.940913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.023 [2024-07-25 12:05:12.941009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.023 [2024-07-25 12:05:12.941034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:16.023 [2024-07-25 12:05:12.941047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.023 [2024-07-25 12:05:12.941058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.023 [2024-07-25 12:05:12.941082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.023 [2024-07-25 12:05:12.941096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:16.023 [2024-07-25 12:05:12.941108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.023 [2024-07-25 12:05:12.941119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.023 [2024-07-25 12:05:13.039824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.023 [2024-07-25 12:05:13.039890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:16.023 [2024-07-25 12:05:13.039907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.023 [2024-07-25 12:05:13.039919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.282 [2024-07-25 12:05:13.123430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.282 [2024-07-25 12:05:13.123503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:16.282 [2024-07-25 12:05:13.123522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.282 [2024-07-25 12:05:13.123546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.282 [2024-07-25 12:05:13.123649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.282 [2024-07-25 12:05:13.123667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:16.282 [2024-07-25 12:05:13.123718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.282 [2024-07-25 12:05:13.123733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.282 [2024-07-25 12:05:13.123786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.282 [2024-07-25 12:05:13.123801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:16.282 [2024-07-25 12:05:13.123814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.282 [2024-07-25 12:05:13.123825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.282 [2024-07-25 12:05:13.123946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.282 [2024-07-25 12:05:13.123964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:16.282 [2024-07-25 12:05:13.123977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:16.282 [2024-07-25 12:05:13.123995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:16.282 [2024-07-25 12:05:13.124048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:16.282 [2024-07-25 12:05:13.124066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock
00:36:16.282 [2024-07-25 12:05:13.124079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:16.282 [2024-07-25 12:05:13.124090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:16.282 [2024-07-25 12:05:13.124156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:16.282 [2024-07-25 12:05:13.124177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:36:16.282 [2024-07-25 12:05:13.124191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:16.282 [2024-07-25 12:05:13.124202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:16.282 [2024-07-25 12:05:13.124271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:16.282 [2024-07-25 12:05:13.124288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:36:16.282 [2024-07-25 12:05:13.124301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:16.282 [2024-07-25 12:05:13.124312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:16.282 [2024-07-25 12:05:13.124449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 447.999 ms, result 0
00:36:17.215
00:36:17.215
00:36:17.215 12:05:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:36:19.791 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:36:19.791 12:05:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:36:19.791 [2024-07-25 12:05:16.497861] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:36:19.791 [2024-07-25 12:05:16.498020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84057 ] 00:36:19.791 [2024-07-25 12:05:16.672000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.049 [2024-07-25 12:05:16.901143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.307 [2024-07-25 12:05:17.214436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:20.307 [2024-07-25 12:05:17.214519] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:20.566 [2024-07-25 12:05:17.374727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.374790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:20.566 [2024-07-25 12:05:17.374812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:20.566 [2024-07-25 12:05:17.374825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.374891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.374910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:20.566 [2024-07-25 12:05:17.374923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:36:20.566 [2024-07-25 12:05:17.374939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.374974] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:20.566 [2024-07-25 12:05:17.375899] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:20.566 [2024-07-25 12:05:17.375944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.375959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:20.566 [2024-07-25 12:05:17.375972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:36:20.566 [2024-07-25 12:05:17.375984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.377125] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:36:20.566 [2024-07-25 12:05:17.393371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.393416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:20.566 [2024-07-25 12:05:17.393435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.248 ms 00:36:20.566 [2024-07-25 12:05:17.393448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.393522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.393544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:20.566 [2024-07-25 12:05:17.393557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:36:20.566 [2024-07-25 12:05:17.393569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.398009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:36:20.566 [2024-07-25 12:05:17.398056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:20.566 [2024-07-25 12:05:17.398073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.346 ms 00:36:20.566 [2024-07-25 12:05:17.398084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.398184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.398203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:20.566 [2024-07-25 12:05:17.398216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:36:20.566 [2024-07-25 12:05:17.398227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.398305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.398324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:20.566 [2024-07-25 12:05:17.398337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:36:20.566 [2024-07-25 12:05:17.398348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.398382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:20.566 [2024-07-25 12:05:17.402707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.402790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:20.566 [2024-07-25 12:05:17.402806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms 00:36:20.566 [2024-07-25 12:05:17.402817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.402864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.566 [2024-07-25 12:05:17.402880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:20.566 [2024-07-25 12:05:17.402892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:36:20.566 [2024-07-25 12:05:17.402903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.566 [2024-07-25 12:05:17.402952] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:20.566 [2024-07-25 12:05:17.402983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:20.566 [2024-07-25 12:05:17.403027] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:20.566 [2024-07-25 12:05:17.403050] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:36:20.566 [2024-07-25 12:05:17.403155] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:20.566 [2024-07-25 12:05:17.403170] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:20.566 [2024-07-25 12:05:17.403185] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:36:20.566 [2024-07-25 12:05:17.403200] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403213] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403225] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:20.567 [2024-07-25 12:05:17.403236] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:20.567 [2024-07-25 12:05:17.403246] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:20.567 [2024-07-25 12:05:17.403257] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:20.567 [2024-07-25 12:05:17.403269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.567 [2024-07-25 12:05:17.403285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:20.567 [2024-07-25 12:05:17.403296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:36:20.567 [2024-07-25 12:05:17.403307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.567 [2024-07-25 12:05:17.403404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.567 [2024-07-25 12:05:17.403418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:20.567 [2024-07-25 12:05:17.403432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:36:20.567 [2024-07-25 12:05:17.403444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.567 [2024-07-25 12:05:17.403560] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:20.567 [2024-07-25 12:05:17.403578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:20.567 [2024-07-25 12:05:17.403595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:20.567 [2024-07-25 12:05:17.403628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:20.567 [2024-07-25 12:05:17.403661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:20.567 [2024-07-25 12:05:17.403684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:20.567 [2024-07-25 12:05:17.403730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:20.567 [2024-07-25 12:05:17.403743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:20.567 [2024-07-25 12:05:17.403754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:20.567 [2024-07-25 12:05:17.403766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:20.567 [2024-07-25 12:05:17.403775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:20.567 [2024-07-25 12:05:17.403797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403807] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:20.567 [2024-07-25 12:05:17.403842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:20.567 [2024-07-25 12:05:17.403873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:20.567 [2024-07-25 12:05:17.403903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:20.567 [2024-07-25 12:05:17.403933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:20.567 [2024-07-25 12:05:17.403953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:20.567 [2024-07-25 12:05:17.403963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:20.567 [2024-07-25 12:05:17.403973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:20.567 [2024-07-25 12:05:17.403983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:20.567 [2024-07-25 12:05:17.403993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:20.567 [2024-07-25 12:05:17.404003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:20.567 [2024-07-25 12:05:17.404013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:20.567 [2024-07-25 12:05:17.404023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:20.567 [2024-07-25 12:05:17.404033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.404044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:20.567 [2024-07-25 12:05:17.404054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:20.567 [2024-07-25 12:05:17.404066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.404076] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:20.567 [2024-07-25 12:05:17.404087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:20.567 [2024-07-25 12:05:17.404098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:20.567 [2024-07-25 12:05:17.404108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:20.567 [2024-07-25 12:05:17.404120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:20.567 [2024-07-25 12:05:17.404130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:20.567 [2024-07-25 12:05:17.404141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:20.567 
[2024-07-25 12:05:17.404151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:20.567 [2024-07-25 12:05:17.404161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:20.567 [2024-07-25 12:05:17.404171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:20.567 [2024-07-25 12:05:17.404183] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:20.567 [2024-07-25 12:05:17.404197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:20.567 [2024-07-25 12:05:17.404222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:20.567 [2024-07-25 12:05:17.404233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:20.567 [2024-07-25 12:05:17.404244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:20.567 [2024-07-25 12:05:17.404256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:20.567 [2024-07-25 12:05:17.404267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:20.567 [2024-07-25 12:05:17.404278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:20.567 [2024-07-25 12:05:17.404289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:20.567 [2024-07-25 12:05:17.404301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:20.567 [2024-07-25 12:05:17.404312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:20.567 [2024-07-25 12:05:17.404368] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:20.567 [2024-07-25 12:05:17.404381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:36:20.567 [2024-07-25 12:05:17.404411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:20.567 [2024-07-25 12:05:17.404422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:20.567 [2024-07-25 12:05:17.404434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:20.567 [2024-07-25 12:05:17.404446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.567 [2024-07-25 12:05:17.404458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:20.567 [2024-07-25 12:05:17.404469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:36:20.567 [2024-07-25 12:05:17.404480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.567 [2024-07-25 12:05:17.445683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.567 [2024-07-25 12:05:17.445777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:20.567 [2024-07-25 12:05:17.445799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.113 ms 00:36:20.567 [2024-07-25 12:05:17.445811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.567 [2024-07-25 12:05:17.445936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.567 [2024-07-25 12:05:17.445953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:20.568 [2024-07-25 12:05:17.445966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:36:20.568 [2024-07-25 12:05:17.445977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.484506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.484561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:20.568 [2024-07-25 12:05:17.484581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.430 ms 00:36:20.568 [2024-07-25 12:05:17.484593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.484665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.484682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:20.568 [2024-07-25 12:05:17.484715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:20.568 [2024-07-25 12:05:17.484734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.485169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.485189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:20.568 [2024-07-25 12:05:17.485203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:36:20.568 [2024-07-25 12:05:17.485213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.485368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.485387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:20.568 [2024-07-25 12:05:17.485399] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:36:20.568 [2024-07-25 12:05:17.485411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.501571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.501628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:20.568 [2024-07-25 12:05:17.501648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:36:20.568 [2024-07-25 12:05:17.501666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.518153] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:20.568 [2024-07-25 12:05:17.518204] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:20.568 [2024-07-25 12:05:17.518225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.518237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:20.568 [2024-07-25 12:05:17.518251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.382 ms 00:36:20.568 [2024-07-25 12:05:17.518263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.548215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.548283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:20.568 [2024-07-25 12:05:17.548303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.883 ms 00:36:20.568 [2024-07-25 12:05:17.548315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.564207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.564255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:20.568 [2024-07-25 12:05:17.564272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.825 ms 00:36:20.568 [2024-07-25 12:05:17.564284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.579817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.579869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:20.568 [2024-07-25 12:05:17.579887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.486 ms 00:36:20.568 [2024-07-25 12:05:17.579899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.568 [2024-07-25 12:05:17.580739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.568 [2024-07-25 12:05:17.580776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:20.568 [2024-07-25 12:05:17.580792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:36:20.568 [2024-07-25 12:05:17.580804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.654034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.654105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:20.827 [2024-07-25 12:05:17.654127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.199 ms 00:36:20.827 [2024-07-25 12:05:17.654148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.668673] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:20.827 [2024-07-25 12:05:17.671820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.671866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:20.827 [2024-07-25 12:05:17.671886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.596 ms 00:36:20.827 [2024-07-25 12:05:17.671898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.672056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.672083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:20.827 [2024-07-25 12:05:17.672107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:20.827 [2024-07-25 12:05:17.672123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.672902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.672944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:20.827 [2024-07-25 12:05:17.672959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:36:20.827 [2024-07-25 12:05:17.672971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.673017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.673042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:20.827 [2024-07-25 12:05:17.673057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:20.827 [2024-07-25 12:05:17.673068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.673111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:20.827 [2024-07-25 12:05:17.673128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.673145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:20.827 [2024-07-25 12:05:17.673157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:36:20.827 [2024-07-25 12:05:17.673168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.706294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.706355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:20.827 [2024-07-25 12:05:17.706375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.099 ms 00:36:20.827 [2024-07-25 12:05:17.706396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.827 [2024-07-25 12:05:17.706490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.827 [2024-07-25 12:05:17.706511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:20.827 [2024-07-25 12:05:17.706524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:36:20.827 [2024-07-25 12:05:17.706536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:36:20.827 [2024-07-25 12:05:17.707900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.673 ms, result 0
00:36:59.432  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 82/1024 [MB] (27 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 135/1024 [MB] (26 MBps) Copying: 162/1024 [MB] (26 MBps) Copying: 190/1024 [MB] (28 MBps) Copying: 218/1024 [MB] (28 MBps) Copying: 244/1024 [MB] (25 MBps) Copying: 270/1024 [MB] (26 MBps) Copying: 298/1024 [MB] (28 MBps) Copying: 326/1024 [MB] (27 MBps) Copying: 354/1024 [MB] (28 MBps) Copying: 380/1024 [MB] (26 MBps) Copying: 407/1024 [MB] (26 MBps) Copying: 433/1024 [MB] (26 MBps) Copying: 461/1024 [MB] (27 MBps) Copying: 488/1024 [MB] (26 MBps) Copying: 516/1024 [MB] (28 MBps) Copying: 543/1024 [MB] (26 MBps) Copying: 571/1024 [MB] (28 MBps) Copying: 597/1024 [MB] (25 MBps) Copying: 625/1024 [MB] (28 MBps) Copying: 653/1024 [MB] (28 MBps) Copying: 681/1024 [MB] (27 MBps) Copying: 706/1024 [MB] (25 MBps) Copying: 733/1024 [MB] (26 MBps) Copying: 758/1024 [MB] (25 MBps) Copying: 783/1024 [MB] (24 MBps) Copying: 809/1024 [MB] (25 MBps) Copying: 833/1024 [MB] (24 MBps) Copying: 858/1024 [MB] (24 MBps) Copying: 885/1024 [MB] (27 MBps) Copying: 912/1024 [MB] (26 MBps) Copying: 937/1024 [MB] (24 MBps) Copying: 962/1024 [MB] (25 MBps) Copying: 987/1024 [MB] (25 MBps) Copying: 1013/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-25 12:05:56.427405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:59.432 [2024-07-25 12:05:56.427494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:36:59.432 [2024-07-25 12:05:56.427519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:36:59.432 [2024-07-25 12:05:56.427535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:59.432 [2024-07-25 12:05:56.427570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:36:59.432 [2024-07-25 12:05:56.432972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:59.432 [2024-07-25 12:05:56.433020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:36:59.432 [2024-07-25 12:05:56.433037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.376 ms
00:36:59.432 [2024-07-25 12:05:56.433056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:59.432 [2024-07-25 12:05:56.433303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:59.432 [2024-07-25 12:05:56.433320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:36:59.432 [2024-07-25 12:05:56.433333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms
00:36:59.432 [2024-07-25 12:05:56.433344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:59.432 [2024-07-25 12:05:56.437073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:59.432 [2024-07-25 12:05:56.437118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:36:59.432 [2024-07-25 12:05:56.437136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms
00:36:59.432 [2024-07-25 12:05:56.437148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:59.432 [2024-07-25 12:05:56.444343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:59.432 [2024-07-25
12:05:56.444391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:59.432 [2024-07-25 12:05:56.444409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.157 ms 00:36:59.432 [2024-07-25 12:05:56.444421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.480516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.480583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:59.692 [2024-07-25 12:05:56.480603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.011 ms 00:36:59.692 [2024-07-25 12:05:56.480614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.498534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.498592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:59.692 [2024-07-25 12:05:56.498612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.864 ms 00:36:59.692 [2024-07-25 12:05:56.498624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.501825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.501871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:59.692 [2024-07-25 12:05:56.501897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:36:59.692 [2024-07-25 12:05:56.501909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.533399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.533454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:36:59.692 [2024-07-25 12:05:56.533473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.466 ms 00:36:59.692 [2024-07-25 12:05:56.533485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.564675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.564735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:36:59.692 [2024-07-25 12:05:56.564754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.141 ms 00:36:59.692 [2024-07-25 12:05:56.564766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.595311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.595360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:59.692 [2024-07-25 12:05:56.595393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.492 ms 00:36:59.692 [2024-07-25 12:05:56.595404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.626463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.692 [2024-07-25 12:05:56.626521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:59.692 [2024-07-25 12:05:56.626540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.964 ms 00:36:59.692 [2024-07-25 12:05:56.626551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.692 [2024-07-25 12:05:56.626598] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:59.692 [2024-07-25 12:05:56.626622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:59.692 [2024-07-25 12:05:56.626636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:36:59.692 [2024-07-25 12:05:56.626649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:59.692 [2024-07-25 12:05:56.626661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:59.692 [2024-07-25 12:05:56.626673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:59.692 [2024-07-25 12:05:56.626685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:59.692 [2024-07-25 12:05:56.626756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.626993] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 
12:05:56.627286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:36:59.693 [2024-07-25 12:05:56.627576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:59.693 [2024-07-25 12:05:56.627839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:59.694 [2024-07-25 12:05:56.627851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:59.694 [2024-07-25 12:05:56.627862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:59.694 [2024-07-25 12:05:56.627874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:36:59.694 [2024-07-25 12:05:56.627885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:59.694 [2024-07-25 12:05:56.627905] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:59.694 [2024-07-25 12:05:56.627917] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ca77a5-df83-4821-8402-2eecbfb8ae8b 00:36:59.694 [2024-07-25 12:05:56.627938] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:36:59.694 [2024-07-25 12:05:56.627949] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:59.694 [2024-07-25 12:05:56.627960] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:59.694 [2024-07-25 12:05:56.627971] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:59.694 [2024-07-25 12:05:56.627981] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:59.694 [2024-07-25 12:05:56.627992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:59.694 [2024-07-25 12:05:56.628002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:59.694 [2024-07-25 12:05:56.628012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:59.694 [2024-07-25 12:05:56.628022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:59.694 [2024-07-25 12:05:56.628033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.694 [2024-07-25 12:05:56.628045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:59.694 [2024-07-25 12:05:56.628061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:36:59.694 [2024-07-25 12:05:56.628072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.694 [2024-07-25 12:05:56.644484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.694 [2024-07-25 12:05:56.644525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:59.694 [2024-07-25 12:05:56.644556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.368 ms 00:36:59.694 [2024-07-25 12:05:56.644567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.694 [2024-07-25 12:05:56.645012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:59.694 [2024-07-25 12:05:56.645043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:59.694 [2024-07-25 12:05:56.645056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:36:59.694 [2024-07-25 12:05:56.645074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.694 [2024-07-25 12:05:56.682091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.694 [2024-07-25 12:05:56.682154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:59.694 [2024-07-25 12:05:56.682172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.694 [2024-07-25 12:05:56.682183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.694 [2024-07-25 12:05:56.682276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.694 [2024-07-25 12:05:56.682293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:59.694 [2024-07-25 12:05:56.682307] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.694 [2024-07-25 12:05:56.682324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.694 [2024-07-25 12:05:56.682436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.694 [2024-07-25 12:05:56.682455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:59.694 [2024-07-25 12:05:56.682468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.694 [2024-07-25 12:05:56.682480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.694 [2024-07-25 12:05:56.682502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.694 [2024-07-25 12:05:56.682515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:59.694 [2024-07-25 12:05:56.682526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.694 [2024-07-25 12:05:56.682536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.782633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.782718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:59.953 [2024-07-25 12:05:56.782739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.782751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.871955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:59.953 [2024-07-25 12:05:56.872046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:59.953 [2024-07-25 12:05:56.872203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:59.953 [2024-07-25 12:05:56.872286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:59.953 [2024-07-25 12:05:56.872450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:36:59.953 [2024-07-25 12:05:56.872537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:59.953 [2024-07-25 12:05:56.872624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:59.953 [2024-07-25 12:05:56.872727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:59.953 [2024-07-25 12:05:56.872746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:59.953 [2024-07-25 12:05:56.872757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:59.953 [2024-07-25 12:05:56.872899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.466 ms, result 0 00:37:01.329 00:37:01.330 00:37:01.330 12:05:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:37:03.234 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:37:03.234 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:37:03.234 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:37:03.234 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:03.234 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:37:03.492 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82216 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82216 ']' 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82216 00:37:03.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82216) - No such process 00:37:03.751 Process with pid 82216 is not found 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82216 is not found' 00:37:03.751 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:37:04.010 Remove shared memory files 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 
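The "testfile2: OK" above is the point of the whole dirty-shutdown exercise: data written before the unclean stop must read back bit-identical once FTL replays its recovery path. A minimal sketch of that checksum round-trip, with the write and restart steps elided (paths are the ones in the log; the real script drives the device via the saved ftl.json config):

# before the dirty shutdown: record a checksum of the freshly written data
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 > testfile2.md5
# ... kill the target without a clean FTL shutdown, restart, reload ftl.json ...
# after recovery: the same bytes must still verify
md5sum -c testfile2.md5    # expected: /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK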
00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:37:04.010 ************************************ 00:37:04.010 END TEST ftl_dirty_shutdown 00:37:04.010 ************************************ 00:37:04.010 00:37:04.010 real 3m46.386s 00:37:04.010 user 4m17.638s 00:37:04.010 sys 0m38.547s 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:04.010 12:06:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:04.010 12:06:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:37:04.010 12:06:00 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:04.010 12:06:00 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:04.010 12:06:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:04.010 ************************************ 00:37:04.010 START TEST ftl_upgrade_shutdown 00:37:04.010 ************************************ 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:37:04.010 * Looking for test storage... 00:37:04.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:04.010 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:37:04.011 
12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:04.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84551 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84551 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84551 ']' 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:04.011 12:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:04.269 [2024-07-25 12:06:01.108866] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
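For reference, the launch-and-wait idiom behind "waitforlisten 84551" above, reduced to a sketch: the real helper in autotest_common.sh retries up to max_retries=100, and polling rpc_get_methods is this sketch's stand-in for a cheap liveness probe, not what the script literally calls.

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &    # same binary and cpumask as the xtrace above
spdk_tgt_pid=$!
# poll the default RPC socket until the target answers, then continue with setup
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done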
00:37:04.269 [2024-07-25 12:06:01.109243] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84551 ] 00:37:04.269 [2024-07-25 12:06:01.280931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.528 [2024-07-25 12:06:01.477110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:37:05.463 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:37:05.722 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:05.981 { 00:37:05.981 "name": "basen1", 00:37:05.981 "aliases": [ 00:37:05.981 "90410efa-9baf-4c20-a2ec-20c6f923d969" 00:37:05.981 ], 00:37:05.981 "product_name": "NVMe disk", 00:37:05.981 "block_size": 4096, 00:37:05.981 "num_blocks": 1310720, 00:37:05.981 "uuid": "90410efa-9baf-4c20-a2ec-20c6f923d969", 00:37:05.981 "assigned_rate_limits": { 00:37:05.981 "rw_ios_per_sec": 0, 00:37:05.981 "rw_mbytes_per_sec": 0, 00:37:05.981 "r_mbytes_per_sec": 0, 00:37:05.981 "w_mbytes_per_sec": 0 00:37:05.981 }, 00:37:05.981 "claimed": true, 00:37:05.981 "claim_type": "read_many_write_one", 00:37:05.981 "zoned": false, 00:37:05.981 "supported_io_types": { 00:37:05.981 "read": true, 00:37:05.981 "write": true, 00:37:05.981 "unmap": true, 00:37:05.981 "flush": true, 00:37:05.981 "reset": true, 00:37:05.981 "nvme_admin": true, 00:37:05.981 "nvme_io": true, 00:37:05.981 "nvme_io_md": false, 00:37:05.981 "write_zeroes": true, 00:37:05.981 "zcopy": false, 00:37:05.981 "get_zone_info": false, 00:37:05.981 "zone_management": false, 00:37:05.981 "zone_append": false, 00:37:05.981 "compare": true, 00:37:05.981 "compare_and_write": false, 00:37:05.981 "abort": true, 00:37:05.981 "seek_hole": false, 00:37:05.981 "seek_data": false, 00:37:05.981 "copy": true, 00:37:05.981 "nvme_iov_md": false 00:37:05.981 }, 00:37:05.981 "driver_specific": { 00:37:05.981 "nvme": [ 00:37:05.981 { 00:37:05.981 "pci_address": "0000:00:11.0", 00:37:05.981 "trid": { 00:37:05.981 "trtype": "PCIe", 00:37:05.981 "traddr": "0000:00:11.0" 00:37:05.981 }, 00:37:05.981 "ctrlr_data": { 00:37:05.981 "cntlid": 0, 00:37:05.981 "vendor_id": "0x1b36", 00:37:05.981 "model_number": "QEMU NVMe Ctrl", 00:37:05.981 "serial_number": "12341", 00:37:05.981 "firmware_revision": "8.0.0", 00:37:05.981 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:05.981 "oacs": { 00:37:05.981 "security": 0, 00:37:05.981 "format": 1, 00:37:05.981 "firmware": 0, 00:37:05.981 "ns_manage": 1 00:37:05.981 }, 00:37:05.981 "multi_ctrlr": false, 00:37:05.981 "ana_reporting": false 00:37:05.981 }, 00:37:05.981 "vs": { 00:37:05.981 "nvme_version": "1.4" 00:37:05.981 }, 00:37:05.981 "ns_data": { 00:37:05.981 "id": 1, 00:37:05.981 "can_share": false 00:37:05.981 } 00:37:05.981 } 00:37:05.981 ], 00:37:05.981 "mp_policy": "active_passive" 00:37:05.981 } 00:37:05.981 } 00:37:05.981 ]' 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:05.981 12:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:06.547 12:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0c36a6ed-73ba-427a-9593-23f206854dbb 00:37:06.547 12:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:37:06.547 12:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c36a6ed-73ba-427a-9593-23f206854dbb 00:37:06.805 12:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:37:07.063 12:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=e4192db5-ca81-463d-8e68-fb55ca0caac1 00:37:07.063 12:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e4192db5-ca81-463d-8e68-fb55ca0caac1 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5612a987-54ee-4020-a774-e2e7023f2413 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5612a987-54ee-4020-a774-e2e7023f2413 ]] 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5612a987-54ee-4020-a774-e2e7023f2413 5120 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5612a987-54ee-4020-a774-e2e7023f2413 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5612a987-54ee-4020-a774-e2e7023f2413 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5612a987-54ee-4020-a774-e2e7023f2413 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:37:07.321 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5612a987-54ee-4020-a774-e2e7023f2413 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:07.580 { 00:37:07.580 "name": "5612a987-54ee-4020-a774-e2e7023f2413", 00:37:07.580 "aliases": [ 00:37:07.580 "lvs/basen1p0" 00:37:07.580 ], 00:37:07.580 "product_name": "Logical Volume", 00:37:07.580 "block_size": 4096, 00:37:07.580 "num_blocks": 5242880, 00:37:07.580 "uuid": "5612a987-54ee-4020-a774-e2e7023f2413", 00:37:07.580 "assigned_rate_limits": { 00:37:07.580 "rw_ios_per_sec": 0, 00:37:07.580 "rw_mbytes_per_sec": 0, 00:37:07.580 "r_mbytes_per_sec": 0, 00:37:07.580 "w_mbytes_per_sec": 0 00:37:07.580 }, 00:37:07.580 "claimed": false, 00:37:07.580 "zoned": false, 00:37:07.580 "supported_io_types": { 00:37:07.580 "read": true, 00:37:07.580 "write": true, 00:37:07.580 "unmap": true, 00:37:07.580 "flush": false, 00:37:07.580 "reset": true, 00:37:07.580 "nvme_admin": false, 00:37:07.580 "nvme_io": false, 00:37:07.580 "nvme_io_md": false, 00:37:07.580 "write_zeroes": true, 00:37:07.580 
"zcopy": false, 00:37:07.580 "get_zone_info": false, 00:37:07.580 "zone_management": false, 00:37:07.580 "zone_append": false, 00:37:07.580 "compare": false, 00:37:07.580 "compare_and_write": false, 00:37:07.580 "abort": false, 00:37:07.580 "seek_hole": true, 00:37:07.580 "seek_data": true, 00:37:07.580 "copy": false, 00:37:07.580 "nvme_iov_md": false 00:37:07.580 }, 00:37:07.580 "driver_specific": { 00:37:07.580 "lvol": { 00:37:07.580 "lvol_store_uuid": "e4192db5-ca81-463d-8e68-fb55ca0caac1", 00:37:07.580 "base_bdev": "basen1", 00:37:07.580 "thin_provision": true, 00:37:07.580 "num_allocated_clusters": 0, 00:37:07.580 "snapshot": false, 00:37:07.580 "clone": false, 00:37:07.580 "esnap_clone": false 00:37:07.580 } 00:37:07.580 } 00:37:07.580 } 00:37:07.580 ]' 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:37:07.580 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:37:07.839 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:37:07.839 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:37:07.839 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:37:08.098 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:37:08.098 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:37:08.098 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5612a987-54ee-4020-a774-e2e7023f2413 -c cachen1p0 --l2p_dram_limit 2 00:37:08.357 [2024-07-25 12:06:05.274204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.357 [2024-07-25 12:06:05.274316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:08.357 [2024-07-25 12:06:05.274342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:08.357 [2024-07-25 12:06:05.274358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.274453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.274476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:08.358 [2024-07-25 12:06:05.274491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:37:08.358 [2024-07-25 12:06:05.274505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.274536] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:08.358 [2024-07-25 12:06:05.275523] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:08.358 [2024-07-25 12:06:05.275557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.275577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:08.358 [2024-07-25 12:06:05.275591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.028 ms 00:37:08.358 [2024-07-25 12:06:05.275605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.275747] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e688ba0b-26b9-47e6-b284-364529faa45a 00:37:08.358 [2024-07-25 12:06:05.276750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.276793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:37:08.358 [2024-07-25 12:06:05.276813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:37:08.358 [2024-07-25 12:06:05.276826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.281440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.281483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:08.358 [2024-07-25 12:06:05.281520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.548 ms 00:37:08.358 [2024-07-25 12:06:05.281532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.281600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.281620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:08.358 [2024-07-25 12:06:05.281635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:37:08.358 [2024-07-25 12:06:05.281646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.281753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.281774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:08.358 [2024-07-25 12:06:05.281793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:37:08.358 [2024-07-25 12:06:05.281806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.281844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:08.358 [2024-07-25 12:06:05.286662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.286857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:08.358 [2024-07-25 12:06:05.286988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.830 ms 00:37:08.358 [2024-07-25 12:06:05.287050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.287205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.287268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:08.358 [2024-07-25 12:06:05.287312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:08.358 [2024-07-25 12:06:05.287353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
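The FTL startup trace unfolding here was produced by the RPC sequence scattered through the xtrace further up; collected in one place for readability (every command is copied verbatim from this log, only the $rpc alias is introduced):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0                  # -> basen1, 5 GiB QEMU NVMe
$rpc bdev_lvol_delete_lvstore -u 0c36a6ed-73ba-427a-9593-23f206854dbb             # clear a leftover lvstore
$rpc bdev_lvol_create_lvstore basen1 lvs                                          # -> e4192db5-ca81-463d-8e68-fb55ca0caac1
$rpc bdev_lvol_create basen1p0 20480 -t -u e4192db5-ca81-463d-8e68-fb55ca0caac1   # thin 20 GiB base lvol -> 5612a987-...
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0                 # -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                                          # one 5 GiB NV-cache split -> cachen1p0
$rpc -t 60 bdev_ftl_create -b ftl -d 5612a987-54ee-4020-a774-e2e7023f2413 -c cachen1p0 --l2p_dram_limit 2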
00:37:08.358 [2024-07-25 12:06:05.287443] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:37:08.358 [2024-07-25 12:06:05.287647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:08.358 [2024-07-25 12:06:05.287841] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:08.358 [2024-07-25 12:06:05.287923] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:37:08.358 [2024-07-25 12:06:05.288088] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:08.358 [2024-07-25 12:06:05.288151] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:08.358 [2024-07-25 12:06:05.288210] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:08.358 [2024-07-25 12:06:05.288314] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:08.358 [2024-07-25 12:06:05.288370] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:08.358 [2024-07-25 12:06:05.288413] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:08.358 [2024-07-25 12:06:05.288454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.288495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:08.358 [2024-07-25 12:06:05.288604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.013 ms 00:37:08.358 [2024-07-25 12:06:05.288625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.288747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.358 [2024-07-25 12:06:05.288771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:08.358 [2024-07-25 12:06:05.288785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:37:08.358 [2024-07-25 12:06:05.288801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.358 [2024-07-25 12:06:05.288912] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:08.358 [2024-07-25 12:06:05.288942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:08.358 [2024-07-25 12:06:05.288956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:08.358 [2024-07-25 12:06:05.288970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.288983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:08.358 [2024-07-25 12:06:05.288995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:08.358 [2024-07-25 12:06:05.289034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:08.358 [2024-07-25 12:06:05.289045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:08.358 [2024-07-25 12:06:05.289058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:08.358 [2024-07-25 12:06:05.289082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:37:08.358 [2024-07-25 12:06:05.289093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:08.358 [2024-07-25 12:06:05.289119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:08.358 [2024-07-25 12:06:05.289132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:08.358 [2024-07-25 12:06:05.289157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:08.358 [2024-07-25 12:06:05.289168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:08.358 [2024-07-25 12:06:05.289192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:08.358 [2024-07-25 12:06:05.289205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:08.358 [2024-07-25 12:06:05.289216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:08.358 [2024-07-25 12:06:05.289228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:08.358 [2024-07-25 12:06:05.289239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:08.358 [2024-07-25 12:06:05.289252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:08.358 [2024-07-25 12:06:05.289262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:08.358 [2024-07-25 12:06:05.289275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:08.358 [2024-07-25 12:06:05.289286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:08.358 [2024-07-25 12:06:05.289298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:08.358 [2024-07-25 12:06:05.289309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:08.358 [2024-07-25 12:06:05.289322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:08.358 [2024-07-25 12:06:05.289334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:08.358 [2024-07-25 12:06:05.289350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:08.358 [2024-07-25 12:06:05.289374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:08.358 [2024-07-25 12:06:05.289384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:08.358 [2024-07-25 12:06:05.289409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:08.358 [2024-07-25 12:06:05.289451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:08.358 [2024-07-25 12:06:05.289462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289474] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:37:08.358 [2024-07-25 12:06:05.289486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:08.358 [2024-07-25 12:06:05.289500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:08.358 [2024-07-25 12:06:05.289512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:08.358 [2024-07-25 12:06:05.289526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:08.358 [2024-07-25 12:06:05.289537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:08.359 [2024-07-25 12:06:05.289552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:08.359 [2024-07-25 12:06:05.289564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:08.359 [2024-07-25 12:06:05.289576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:08.359 [2024-07-25 12:06:05.289587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:08.359 [2024-07-25 12:06:05.289605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:08.359 [2024-07-25 12:06:05.289622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:08.359 [2024-07-25 12:06:05.289650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:08.359 [2024-07-25 12:06:05.289702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:08.359 [2024-07-25 12:06:05.289716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:08.359 [2024-07-25 12:06:05.289731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:08.359 [2024-07-25 12:06:05.289743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:08.359 [2024-07-25 12:06:05.289842] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:08.359 [2024-07-25 12:06:05.289858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:08.359 [2024-07-25 12:06:05.289884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:08.359 [2024-07-25 12:06:05.289898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:08.359 [2024-07-25 12:06:05.289910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:08.359 [2024-07-25 12:06:05.289926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:08.359 [2024-07-25 12:06:05.289938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:08.359 [2024-07-25 12:06:05.289952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:37:08.359 [2024-07-25 12:06:05.289964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:08.359 [2024-07-25 12:06:05.290020] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
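Quick cross-check of the layout dump above, using only numbers printed in it: the L2P table holds 3774873 entries at the stated 4-byte address size, and the nvc region reserved for it (type 0x2, blk_sz 0xe80 blocks of 4 KiB) is the next block-aligned size up. The "Scrubbing 5 chunks" that follows likewise matches the dump's "NV cache chunk count 5".

echo $((3774873 * 4))    # 15099492 bytes, ~14.40 MiB of L2P payload
echo $((0xe80 * 4096))   # 15204352 bytes == 14.50 MiB, the "Region l2p" size in the dump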
00:37:08.359 [2024-07-25 12:06:05.290039] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:10.282 [2024-07-25 12:06:07.265220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.282 [2024-07-25 12:06:07.265488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:10.282 [2024-07-25 12:06:07.265681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1975.203 ms 00:37:10.282 [2024-07-25 12:06:07.265838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.282 [2024-07-25 12:06:07.298822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.282 [2024-07-25 12:06:07.299082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:10.282 [2024-07-25 12:06:07.299242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.577 ms 00:37:10.282 [2024-07-25 12:06:07.299407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.282 [2024-07-25 12:06:07.299594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.282 [2024-07-25 12:06:07.299664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:10.282 [2024-07-25 12:06:07.299906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:37:10.282 [2024-07-25 12:06:07.299976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.339847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.340090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:10.541 [2024-07-25 12:06:07.340248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.670 ms 00:37:10.541 [2024-07-25 12:06:07.340412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.340530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.340616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:10.541 [2024-07-25 12:06:07.340777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:10.541 [2024-07-25 12:06:07.340918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.341368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.341510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:10.541 [2024-07-25 12:06:07.341646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:37:10.541 [2024-07-25 12:06:07.341784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.341901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.341964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:10.541 [2024-07-25 12:06:07.342084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:37:10.541 [2024-07-25 12:06:07.342253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.360143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.360355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:10.541 [2024-07-25 12:06:07.360485] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.776 ms 00:37:10.541 [2024-07-25 12:06:07.360609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.374386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:10.541 [2024-07-25 12:06:07.375426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.375468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:10.541 [2024-07-25 12:06:07.375487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.632 ms 00:37:10.541 [2024-07-25 12:06:07.375502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.410291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.410382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:37:10.541 [2024-07-25 12:06:07.410405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.746 ms 00:37:10.541 [2024-07-25 12:06:07.410420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.410549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.410574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:10.541 [2024-07-25 12:06:07.410588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:37:10.541 [2024-07-25 12:06:07.410605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.441949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.442000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:37:10.541 [2024-07-25 12:06:07.442019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.220 ms 00:37:10.541 [2024-07-25 12:06:07.442039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.472961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.473031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:37:10.541 [2024-07-25 12:06:07.473051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.870 ms 00:37:10.541 [2024-07-25 12:06:07.473065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.473873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.473908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:10.541 [2024-07-25 12:06:07.473926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:37:10.541 [2024-07-25 12:06:07.473940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.541 [2024-07-25 12:06:07.562245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.541 [2024-07-25 12:06:07.562323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:37:10.541 [2024-07-25 12:06:07.562346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.232 ms 00:37:10.541 [2024-07-25 12:06:07.562364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.801 [2024-07-25 12:06:07.594764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:37:10.801 [2024-07-25 12:06:07.594818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:37:10.801 [2024-07-25 12:06:07.594837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.340 ms 00:37:10.801 [2024-07-25 12:06:07.594852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.801 [2024-07-25 12:06:07.626136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.801 [2024-07-25 12:06:07.626187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:37:10.801 [2024-07-25 12:06:07.626227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.225 ms 00:37:10.801 [2024-07-25 12:06:07.626243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.801 [2024-07-25 12:06:07.657866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.801 [2024-07-25 12:06:07.657919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:10.801 [2024-07-25 12:06:07.657939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.571 ms 00:37:10.801 [2024-07-25 12:06:07.657954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.801 [2024-07-25 12:06:07.658010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.801 [2024-07-25 12:06:07.658034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:10.801 [2024-07-25 12:06:07.658049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:10.801 [2024-07-25 12:06:07.658065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.801 [2024-07-25 12:06:07.658183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.801 [2024-07-25 12:06:07.658221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:10.801 [2024-07-25 12:06:07.658237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:37:10.801 [2024-07-25 12:06:07.658251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.801 [2024-07-25 12:06:07.659315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2384.582 ms, result 0 00:37:10.801 { 00:37:10.801 "name": "ftl", 00:37:10.801 "uuid": "e688ba0b-26b9-47e6-b284-364529faa45a" 00:37:10.801 } 00:37:10.801 12:06:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:37:11.060 [2024-07-25 12:06:07.974593] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.060 12:06:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:37:11.319 12:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:37:11.577 [2024-07-25 12:06:08.535282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:11.577 12:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:37:11.835 [2024-07-25 12:06:08.812872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:11.835 12:06:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:37:12.403 Fill FTL, iteration 1 00:37:12.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84668 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84668 /var/tmp/spdk.tgt.sock 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84668 ']' 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:12.403 12:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:12.403 [2024-07-25 12:06:09.361556] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
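Condensed from the RPC calls traced above, the target-side export amounts to the following sequence (transport, subsystem, namespace, listener, then persisting the configuration). Flags and addresses are copied verbatim from the log; the redirect target for save_config is an assumption, chosen to match the tgt.json the target is later restarted from:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config > tgt.json   # assumed destination; the restart later uses config/tgt.json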
00:37:12.403 [2024-07-25 12:06:09.361983] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84668 ] 00:37:12.661 [2024-07-25 12:06:09.538156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.920 [2024-07-25 12:06:09.780279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.487 12:06:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:13.487 12:06:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:37:13.487 12:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:37:14.055 ftln1 00:37:14.055 12:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:37:14.055 12:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84668 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84668 ']' 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84668 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84668 00:37:14.314 killing process with pid 84668 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84668' 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84668 00:37:14.314 12:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84668 00:37:16.215 12:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:37:16.215 12:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:37:16.528 [2024-07-25 12:06:13.291602] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
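On the initiator side, the commands traced above reduce to attaching the exported namespace (which surfaces as bdev ftln1) and pushing 1 GiB of random data through it. A sketch with the long repository paths shortened (spdk_dd and ini.json stand in for the full /home/vagrant/spdk_repo/spdk/... paths shown in the log):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0          # namespace appears as ftln1
  # fill: 1024 blocks x 1 MiB = 1 GiB, queue depth 2, starting at block 0
  spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0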
00:37:16.528 [2024-07-25 12:06:13.291765] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84721 ] 00:37:16.528 [2024-07-25 12:06:13.455397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.786 [2024-07-25 12:06:13.648186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.351  Copying: 202/1024 [MB] (202 MBps) Copying: 415/1024 [MB] (213 MBps) Copying: 632/1024 [MB] (217 MBps) Copying: 849/1024 [MB] (217 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:37:23.351 00:37:23.351 Calculate MD5 checksum, iteration 1 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:23.351 12:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:23.351 [2024-07-25 12:06:20.134267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:37:23.351 [2024-07-25 12:06:20.134732] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84791 ] 00:37:23.351 [2024-07-25 12:06:20.306504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.611 [2024-07-25 12:06:20.492016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.109  Copying: 500/1024 [MB] (500 MBps) Copying: 989/1024 [MB] (489 MBps) Copying: 1024/1024 [MB] (average 492 MBps) 00:37:27.109 00:37:27.109 12:06:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:37:27.109 12:06:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:29.633 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:29.633 Fill FTL, iteration 2 00:37:29.633 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a769af86981a2105804a9d097e15e46a 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:29.634 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:29.634 [2024-07-25 12:06:26.268346] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
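Each iteration is verified by reading the freshly written region back out of ftln1 into a scratch file and recording its md5 in sums[i]; a769af86981a2105804a9d097e15e46a above is the checksum of the first gigabyte. Offsets in spdk_dd are counted in --bs units, so the seek=1024 of iteration 2 (and the matching skip=1024 on readback) addresses the second gigabyte:

  # offset arithmetic: 1024 blocks of 1048576 bytes = 1 GiB
  echo $((1024 * 1048576))   # 1073741824, the script's $size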
00:37:29.634 [2024-07-25 12:06:26.268513] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84859 ] 00:37:29.634 [2024-07-25 12:06:26.439417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.895 [2024-07-25 12:06:26.732086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.468  Copying: 210/1024 [MB] (210 MBps) Copying: 421/1024 [MB] (211 MBps) Copying: 628/1024 [MB] (207 MBps) Copying: 841/1024 [MB] (213 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:37:36.468 00:37:36.468 Calculate MD5 checksum, iteration 2 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:36.468 12:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:36.468 [2024-07-25 12:06:33.254462] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:37:36.468 [2024-07-25 12:06:33.254615] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84930 ] 00:37:36.468 [2024-07-25 12:06:33.414587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.726 [2024-07-25 12:06:33.602363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.938  Copying: 502/1024 [MB] (502 MBps) Copying: 1024/1024 [MB] (average 514 MBps) 00:37:40.938 00:37:40.938 12:06:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:37:40.938 12:06:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:43.468 12:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:43.468 12:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=39ccb2cb037294f0848e83e134d52197 00:37:43.468 12:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:43.468 12:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:43.468 12:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:43.468 [2024-07-25 12:06:40.136560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:43.468 [2024-07-25 12:06:40.136837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:43.469 [2024-07-25 12:06:40.136976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:43.469 [2024-07-25 12:06:40.137046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:43.469 [2024-07-25 12:06:40.137216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:43.469 [2024-07-25 12:06:40.137335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:43.469 [2024-07-25 12:06:40.137453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:43.469 [2024-07-25 12:06:40.137567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:43.469 [2024-07-25 12:06:40.137663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:43.469 [2024-07-25 12:06:40.137744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:43.469 [2024-07-25 12:06:40.137789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:43.469 [2024-07-25 12:06:40.137827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:43.469 [2024-07-25 12:06:40.137945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.374 ms, result 0 00:37:43.469 true 00:37:43.469 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:43.469 { 00:37:43.469 "name": "ftl", 00:37:43.469 "properties": [ 00:37:43.469 { 00:37:43.469 "name": "superblock_version", 00:37:43.469 "value": 5, 00:37:43.469 "read-only": true 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "name": "base_device", 00:37:43.469 "bands": [ 00:37:43.469 { 00:37:43.469 "id": 0, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 1, 
00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 2, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 3, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 4, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 5, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 6, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 7, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 8, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 9, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 10, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 11, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 12, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 13, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 14, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 15, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 16, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 17, 00:37:43.469 "state": "FREE", 00:37:43.469 "validity": 0.0 00:37:43.469 } 00:37:43.469 ], 00:37:43.469 "read-only": true 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "name": "cache_device", 00:37:43.469 "type": "bdev", 00:37:43.469 "chunks": [ 00:37:43.469 { 00:37:43.469 "id": 0, 00:37:43.469 "state": "INACTIVE", 00:37:43.469 "utilization": 0.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 1, 00:37:43.469 "state": "CLOSED", 00:37:43.469 "utilization": 1.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 2, 00:37:43.469 "state": "CLOSED", 00:37:43.469 "utilization": 1.0 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 3, 00:37:43.469 "state": "OPEN", 00:37:43.469 "utilization": 0.001953125 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "id": 4, 00:37:43.469 "state": "OPEN", 00:37:43.469 "utilization": 0.0 00:37:43.469 } 00:37:43.469 ], 00:37:43.469 "read-only": true 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "name": "verbose_mode", 00:37:43.469 "value": true, 00:37:43.469 "unit": "", 00:37:43.469 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:43.469 }, 00:37:43.469 { 00:37:43.469 "name": "prep_upgrade_on_shutdown", 00:37:43.469 "value": false, 00:37:43.469 "unit": "", 00:37:43.469 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:43.469 } 00:37:43.469 ] 00:37:43.469 } 00:37:43.469 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:37:43.728 [2024-07-25 12:06:40.664600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:43.728 [2024-07-25 12:06:40.664670] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:43.728 [2024-07-25 12:06:40.664707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:43.728 [2024-07-25 12:06:40.664723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:43.728 [2024-07-25 12:06:40.664762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:43.728 [2024-07-25 12:06:40.664778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:43.728 [2024-07-25 12:06:40.664791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:43.728 [2024-07-25 12:06:40.664801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:43.728 [2024-07-25 12:06:40.664829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:43.728 [2024-07-25 12:06:40.664849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:43.728 [2024-07-25 12:06:40.664861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:43.728 [2024-07-25 12:06:40.664871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:43.728 [2024-07-25 12:06:40.664946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.336 ms, result 0 00:37:43.728 true 00:37:43.728 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:37:43.728 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:43.728 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:43.986 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:37:43.986 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:37:43.986 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:44.244 [2024-07-25 12:06:41.169170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:44.244 [2024-07-25 12:06:41.169423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:44.244 [2024-07-25 12:06:41.169556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:44.244 [2024-07-25 12:06:41.169685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:44.244 [2024-07-25 12:06:41.169803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:44.244 [2024-07-25 12:06:41.169863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:44.244 [2024-07-25 12:06:41.169992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:44.244 [2024-07-25 12:06:41.170045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:44.244 [2024-07-25 12:06:41.170148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:44.244 [2024-07-25 12:06:41.170231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:44.244 [2024-07-25 12:06:41.170273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:44.244 [2024-07-25 12:06:41.170309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:44.244 [2024-07-25 12:06:41.170498] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.308 ms, result 0 00:37:44.244 true 00:37:44.244 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:44.502 { 00:37:44.503 "name": "ftl", 00:37:44.503 "properties": [ 00:37:44.503 { 00:37:44.503 "name": "superblock_version", 00:37:44.503 "value": 5, 00:37:44.503 "read-only": true 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "name": "base_device", 00:37:44.503 "bands": [ 00:37:44.503 { 00:37:44.503 "id": 0, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 1, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 2, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 3, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 4, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 5, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 6, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 7, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 8, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 9, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 10, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 11, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 12, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 13, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 14, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 15, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 16, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 17, 00:37:44.503 "state": "FREE", 00:37:44.503 "validity": 0.0 00:37:44.503 } 00:37:44.503 ], 00:37:44.503 "read-only": true 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "name": "cache_device", 00:37:44.503 "type": "bdev", 00:37:44.503 "chunks": [ 00:37:44.503 { 00:37:44.503 "id": 0, 00:37:44.503 "state": "INACTIVE", 00:37:44.503 "utilization": 0.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 1, 00:37:44.503 "state": "CLOSED", 00:37:44.503 "utilization": 1.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 2, 00:37:44.503 "state": "CLOSED", 00:37:44.503 "utilization": 1.0 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 3, 00:37:44.503 "state": "OPEN", 00:37:44.503 "utilization": 0.001953125 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "id": 4, 00:37:44.503 "state": "OPEN", 00:37:44.503 "utilization": 0.0 00:37:44.503 } 00:37:44.503 ], 00:37:44.503 "read-only": true 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "name": "verbose_mode", 00:37:44.503 "value": true, 00:37:44.503 "unit": "", 00:37:44.503 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:44.503 }, 00:37:44.503 { 00:37:44.503 "name": "prep_upgrade_on_shutdown", 00:37:44.503 "value": true, 00:37:44.503 "unit": "", 00:37:44.503 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:44.503 } 00:37:44.503 ] 00:37:44.503 } 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84551 ]] 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84551 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84551 ']' 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84551 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84551 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:44.503 killing process with pid 84551 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84551' 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84551 00:37:44.503 12:06:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84551 00:37:45.438 [2024-07-25 12:06:42.389585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:37:45.438 [2024-07-25 12:06:42.406148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:45.438 [2024-07-25 12:06:42.406208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:45.438 [2024-07-25 12:06:42.406229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:45.438 [2024-07-25 12:06:42.406241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:45.438 [2024-07-25 12:06:42.406271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:45.438 [2024-07-25 12:06:42.409613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:45.438 [2024-07-25 12:06:42.409654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:45.438 [2024-07-25 12:06:42.409677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.319 ms 00:37:45.438 [2024-07-25 12:06:42.409786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.886047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.886125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:55.473 [2024-07-25 12:06:50.886153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8476.282 ms 00:37:55.473 [2024-07-25 12:06:50.886177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.887440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.887483] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:55.473 [2024-07-25 12:06:50.887500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.237 ms 00:37:55.473 [2024-07-25 12:06:50.887512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.888753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.888789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:37:55.473 [2024-07-25 12:06:50.888811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.197 ms 00:37:55.473 [2024-07-25 12:06:50.888822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.902015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.902063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:55.473 [2024-07-25 12:06:50.902089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.149 ms 00:37:55.473 [2024-07-25 12:06:50.902102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.910348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.910397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:55.473 [2024-07-25 12:06:50.910416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.198 ms 00:37:55.473 [2024-07-25 12:06:50.910428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.910556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.910591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:55.473 [2024-07-25 12:06:50.910614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:37:55.473 [2024-07-25 12:06:50.910626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.923186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.923230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:37:55.473 [2024-07-25 12:06:50.923247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.535 ms 00:37:55.473 [2024-07-25 12:06:50.923258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.936389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.936445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:37:55.473 [2024-07-25 12:06:50.936464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.085 ms 00:37:55.473 [2024-07-25 12:06:50.936475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.949161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.949219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:55.473 [2024-07-25 12:06:50.949239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.634 ms 00:37:55.473 [2024-07-25 12:06:50.949250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.962020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 
[2024-07-25 12:06:50.962085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:55.473 [2024-07-25 12:06:50.962110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.655 ms 00:37:55.473 [2024-07-25 12:06:50.962129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.962202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:55.473 [2024-07-25 12:06:50.962240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:55.473 [2024-07-25 12:06:50.962266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:55.473 [2024-07-25 12:06:50.962288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:55.473 [2024-07-25 12:06:50.962309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:55.473 [2024-07-25 12:06:50.962604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:55.473 [2024-07-25 12:06:50.962624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e688ba0b-26b9-47e6-b284-364529faa45a 00:37:55.473 [2024-07-25 12:06:50.962639] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:55.473 [2024-07-25 12:06:50.962655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:37:55.473 [2024-07-25 12:06:50.962674] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:37:55.473 [2024-07-25 12:06:50.962722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:37:55.473 [2024-07-25 12:06:50.962745] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:55.473 [2024-07-25 12:06:50.962757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:55.473 [2024-07-25 12:06:50.962767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:55.473 [2024-07-25 12:06:50.962777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:55.473 [2024-07-25 12:06:50.962788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:55.473 [2024-07-25 12:06:50.962800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.473 [2024-07-25 12:06:50.962813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:55.473 [2024-07-25 12:06:50.962846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:37:55.473 [2024-07-25 12:06:50.962870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.473 [2024-07-25 12:06:50.981230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.474 [2024-07-25 12:06:50.981283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:55.474 [2024-07-25 12:06:50.981311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.298 ms 00:37:55.474 [2024-07-25 12:06:50.981323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:50.981832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.474 [2024-07-25 12:06:50.981861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:55.474 [2024-07-25 12:06:50.981876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.451 ms 00:37:55.474 [2024-07-25 12:06:50.981888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.034070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.034143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:55.474 [2024-07-25 12:06:51.034195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.034211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.034274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.034290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:55.474 [2024-07-25 12:06:51.034302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.034313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.034439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.034459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:55.474 [2024-07-25 12:06:51.034478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.034489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.034514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 
12:06:51.034539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:55.474 [2024-07-25 12:06:51.034551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.034562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.134656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.134743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:55.474 [2024-07-25 12:06:51.134764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.134777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.218833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.218890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:55.474 [2024-07-25 12:06:51.218909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.218921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.219068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:55.474 [2024-07-25 12:06:51.219081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.219100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.219179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:55.474 [2024-07-25 12:06:51.219191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.219202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.219346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:55.474 [2024-07-25 12:06:51.219359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.219371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.219451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:55.474 [2024-07-25 12:06:51.219463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.219474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.219534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:55.474 [2024-07-25 12:06:51.219545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.219569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:37:55.474 [2024-07-25 12:06:51.219649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:55.474 [2024-07-25 12:06:51.219660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.474 [2024-07-25 12:06:51.219671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.474 [2024-07-25 12:06:51.219832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8813.733 ms, result 0 00:37:58.028 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:58.028 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:37:58.028 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:58.028 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:58.028 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:58.028 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85145 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85145 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85145 ']' 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:58.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:58.029 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:58.029 [2024-07-25 12:06:54.892028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
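Before the restart above, the shutdown dump accounts exactly for the two fill passes: bands 1 and 2 are closed at 261120 blocks each, band 3 holds the remaining 2048, and 261120 + 261120 + 2048 = 524288 matches the reported user writes (2 GiB at the 4 KiB FTL block size those counts imply; the block size itself is an inference from the numbers, not stated in the log). The reported write amplification is simply total writes over user writes:

  echo 'scale=4; 786752 / 524288' | bc   # 1.5006, the WAF printed above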
00:37:58.029 [2024-07-25 12:06:54.892280] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85145 ] 00:37:58.287 [2024-07-25 12:06:55.093670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.546 [2024-07-25 12:06:55.322117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.112 [2024-07-25 12:06:56.120113] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:59.112 [2024-07-25 12:06:56.120192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:59.372 [2024-07-25 12:06:56.268385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.268448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:59.372 [2024-07-25 12:06:56.268471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:59.372 [2024-07-25 12:06:56.268483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.268560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.268581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:59.372 [2024-07-25 12:06:56.268595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:37:59.372 [2024-07-25 12:06:56.268607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.268648] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:59.372 [2024-07-25 12:06:56.269609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:59.372 [2024-07-25 12:06:56.269656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.269672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:59.372 [2024-07-25 12:06:56.269686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.019 ms 00:37:59.372 [2024-07-25 12:06:56.269726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.271000] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:59.372 [2024-07-25 12:06:56.287160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.287217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:59.372 [2024-07-25 12:06:56.287238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.161 ms 00:37:59.372 [2024-07-25 12:06:56.287251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.287344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.287365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:59.372 [2024-07-25 12:06:56.287379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:37:59.372 [2024-07-25 12:06:56.287392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.291901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 
12:06:56.291955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:59.372 [2024-07-25 12:06:56.291972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.384 ms 00:37:59.372 [2024-07-25 12:06:56.291985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.292091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.292113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:59.372 [2024-07-25 12:06:56.292132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:37:59.372 [2024-07-25 12:06:56.292144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.292227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.292246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:59.372 [2024-07-25 12:06:56.292259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:37:59.372 [2024-07-25 12:06:56.292271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.292311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:59.372 [2024-07-25 12:06:56.296611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.296654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:59.372 [2024-07-25 12:06:56.296672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.310 ms 00:37:59.372 [2024-07-25 12:06:56.296684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.296749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.296768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:59.372 [2024-07-25 12:06:56.296787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:59.372 [2024-07-25 12:06:56.296799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.296854] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:59.372 [2024-07-25 12:06:56.296887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:59.372 [2024-07-25 12:06:56.296932] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:59.372 [2024-07-25 12:06:56.296954] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:37:59.372 [2024-07-25 12:06:56.297062] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:59.372 [2024-07-25 12:06:56.297084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:59.372 [2024-07-25 12:06:56.297100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:37:59.372 [2024-07-25 12:06:56.297116] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:59.372 [2024-07-25 12:06:56.297130] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:37:59.372 [2024-07-25 12:06:56.297143] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:59.372 [2024-07-25 12:06:56.297155] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:59.372 [2024-07-25 12:06:56.297167] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:59.372 [2024-07-25 12:06:56.297179] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:59.372 [2024-07-25 12:06:56.297192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.297205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:59.372 [2024-07-25 12:06:56.297218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:37:59.372 [2024-07-25 12:06:56.297235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.297336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.372 [2024-07-25 12:06:56.297358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:59.372 [2024-07-25 12:06:56.297371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:37:59.372 [2024-07-25 12:06:56.297383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.372 [2024-07-25 12:06:56.297526] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:59.372 [2024-07-25 12:06:56.297546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:59.372 [2024-07-25 12:06:56.297560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:59.372 [2024-07-25 12:06:56.297573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.372 [2024-07-25 12:06:56.297592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:59.372 [2024-07-25 12:06:56.297604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:59.372 [2024-07-25 12:06:56.297616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:59.372 [2024-07-25 12:06:56.297627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:59.372 [2024-07-25 12:06:56.297640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:59.372 [2024-07-25 12:06:56.297652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.297663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:59.373 [2024-07-25 12:06:56.297674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:59.373 [2024-07-25 12:06:56.297685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.297721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:59.373 [2024-07-25 12:06:56.297736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:59.373 [2024-07-25 12:06:56.297748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.297775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:59.373 [2024-07-25 12:06:56.297788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:59.373 [2024-07-25 12:06:56.297800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.297811] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:59.373 [2024-07-25 12:06:56.297827] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:59.373 [2024-07-25 12:06:56.297838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:59.373 [2024-07-25 12:06:56.297850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:59.373 [2024-07-25 12:06:56.297861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:59.373 [2024-07-25 12:06:56.297872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:59.373 [2024-07-25 12:06:56.297884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:59.373 [2024-07-25 12:06:56.297895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:59.373 [2024-07-25 12:06:56.297906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:59.373 [2024-07-25 12:06:56.297918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:59.373 [2024-07-25 12:06:56.297929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:59.373 [2024-07-25 12:06:56.297940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:59.373 [2024-07-25 12:06:56.297951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:59.373 [2024-07-25 12:06:56.297962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:59.373 [2024-07-25 12:06:56.297973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.297985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:59.373 [2024-07-25 12:06:56.297996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:59.373 [2024-07-25 12:06:56.298007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.298018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:59.373 [2024-07-25 12:06:56.298030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:59.373 [2024-07-25 12:06:56.298041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.298052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:59.373 [2024-07-25 12:06:56.298064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:59.373 [2024-07-25 12:06:56.298075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.298086] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:59.373 [2024-07-25 12:06:56.298098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:59.373 [2024-07-25 12:06:56.298110] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:59.373 [2024-07-25 12:06:56.298123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:59.373 [2024-07-25 12:06:56.298144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:59.373 [2024-07-25 12:06:56.298168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:59.373 [2024-07-25 12:06:56.298180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:59.373 [2024-07-25 12:06:56.298193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:59.373 [2024-07-25 12:06:56.298219] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:59.373 [2024-07-25 12:06:56.298231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:59.373 [2024-07-25 12:06:56.298244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:59.373 [2024-07-25 12:06:56.298258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:59.373 [2024-07-25 12:06:56.298285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:59.373 [2024-07-25 12:06:56.298321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:59.373 [2024-07-25 12:06:56.298334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:59.373 [2024-07-25 12:06:56.298346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:59.373 [2024-07-25 12:06:56.298358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:59.373 [2024-07-25 12:06:56.298443] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:59.373 [2024-07-25 12:06:56.298457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:59.373 [2024-07-25 12:06:56.298483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:59.373 [2024-07-25 12:06:56.298495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:59.373 [2024-07-25 12:06:56.298508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:59.373 [2024-07-25 12:06:56.298521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.373 [2024-07-25 12:06:56.298534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:59.373 [2024-07-25 12:06:56.298546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.063 ms 00:37:59.373 [2024-07-25 12:06:56.298564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.373 [2024-07-25 12:06:56.298628] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:37:59.373 [2024-07-25 12:06:56.298648] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:38:01.273 [2024-07-25 12:06:58.207612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.207707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:38:01.273 [2024-07-25 12:06:58.207732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1908.996 ms 00:38:01.273 [2024-07-25 12:06:58.207758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.240220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.240322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:01.273 [2024-07-25 12:06:58.240351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.152 ms 00:38:01.273 [2024-07-25 12:06:58.240365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.240540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.240561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:38:01.273 [2024-07-25 12:06:58.240575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:38:01.273 [2024-07-25 12:06:58.240588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.279704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.279773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:01.273 [2024-07-25 12:06:58.279795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.038 ms 00:38:01.273 [2024-07-25 12:06:58.279808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.279904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.279922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:01.273 [2024-07-25 12:06:58.279937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:38:01.273 [2024-07-25 12:06:58.279949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.280367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.280387] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:01.273 [2024-07-25 12:06:58.280401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:38:01.273 [2024-07-25 12:06:58.280414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.280474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.280491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:01.273 [2024-07-25 12:06:58.280505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:38:01.273 [2024-07-25 12:06:58.280517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.273 [2024-07-25 12:06:58.298183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.273 [2024-07-25 12:06:58.298263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:01.273 [2024-07-25 12:06:58.298286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.632 ms 00:38:01.273 [2024-07-25 12:06:58.298299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.315409] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:38:01.532 [2024-07-25 12:06:58.315506] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:38:01.532 [2024-07-25 12:06:58.315531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.315545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:38:01.532 [2024-07-25 12:06:58.315561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.016 ms 00:38:01.532 [2024-07-25 12:06:58.315573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.334314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.334409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:38:01.532 [2024-07-25 12:06:58.334431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.628 ms 00:38:01.532 [2024-07-25 12:06:58.334445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.350008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.350090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:38:01.532 [2024-07-25 12:06:58.350112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.478 ms 00:38:01.532 [2024-07-25 12:06:58.350125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.366283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.366354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:38:01.532 [2024-07-25 12:06:58.366375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.041 ms 00:38:01.532 [2024-07-25 12:06:58.366387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.367249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.367285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:38:01.532 [2024-07-25 
12:06:58.367308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.703 ms 00:38:01.532 [2024-07-25 12:06:58.367320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.450617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.450732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:38:01.532 [2024-07-25 12:06:58.450757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 83.259 ms 00:38:01.532 [2024-07-25 12:06:58.450771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.464238] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:38:01.532 [2024-07-25 12:06:58.465363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.465405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:38:01.532 [2024-07-25 12:06:58.465434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.489 ms 00:38:01.532 [2024-07-25 12:06:58.465447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.465597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.465619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:38:01.532 [2024-07-25 12:06:58.465634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:38:01.532 [2024-07-25 12:06:58.465646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.465755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.465778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:38:01.532 [2024-07-25 12:06:58.465793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:38:01.532 [2024-07-25 12:06:58.465811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.465851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.465869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:38:01.532 [2024-07-25 12:06:58.465882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:38:01.532 [2024-07-25 12:06:58.465894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.465938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:38:01.532 [2024-07-25 12:06:58.465956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.465969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:38:01.532 [2024-07-25 12:06:58.465987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:38:01.532 [2024-07-25 12:06:58.465999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.498091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.498363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:38:01.532 [2024-07-25 12:06:58.498497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.057 ms 00:38:01.532 [2024-07-25 12:06:58.498551] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.498766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:01.532 [2024-07-25 12:06:58.498897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:38:01.532 [2024-07-25 12:06:58.499011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:38:01.532 [2024-07-25 12:06:58.499122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:01.532 [2024-07-25 12:06:58.500530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2231.528 ms, result 0 00:38:01.532 [2024-07-25 12:06:58.515040] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.532 [2024-07-25 12:06:58.531041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:38:01.532 [2024-07-25 12:06:58.539892] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:01.790 12:06:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:01.790 12:06:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:38:01.790 12:06:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:01.790 12:06:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:38:01.790 12:06:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:38:02.048 [2024-07-25 12:06:58.832083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:02.048 [2024-07-25 12:06:58.832150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:38:02.048 [2024-07-25 12:06:58.832172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:38:02.048 [2024-07-25 12:06:58.832185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:02.048 [2024-07-25 12:06:58.832222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:02.048 [2024-07-25 12:06:58.832239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:38:02.048 [2024-07-25 12:06:58.832253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:38:02.048 [2024-07-25 12:06:58.832264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:02.048 [2024-07-25 12:06:58.832292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:02.048 [2024-07-25 12:06:58.832308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:38:02.048 [2024-07-25 12:06:58.832321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:02.048 [2024-07-25 12:06:58.832340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:02.048 [2024-07-25 12:06:58.832418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.323 ms, result 0 00:38:02.048 true 00:38:02.048 12:06:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:02.306 { 00:38:02.306 "name": "ftl", 00:38:02.306 "properties": [ 00:38:02.306 { 00:38:02.306 "name": "superblock_version", 00:38:02.306 "value": 5, 00:38:02.306 "read-only": true 00:38:02.306 }, 
00:38:02.306 { 00:38:02.306 "name": "base_device", 00:38:02.306 "bands": [ 00:38:02.306 { 00:38:02.306 "id": 0, 00:38:02.306 "state": "CLOSED", 00:38:02.306 "validity": 1.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 1, 00:38:02.306 "state": "CLOSED", 00:38:02.306 "validity": 1.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 2, 00:38:02.306 "state": "CLOSED", 00:38:02.306 "validity": 0.007843137254901933 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 3, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 4, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 5, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 6, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 7, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 8, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 9, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 10, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 11, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 12, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 13, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 14, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 15, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 16, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 17, 00:38:02.306 "state": "FREE", 00:38:02.306 "validity": 0.0 00:38:02.306 } 00:38:02.306 ], 00:38:02.306 "read-only": true 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "name": "cache_device", 00:38:02.306 "type": "bdev", 00:38:02.306 "chunks": [ 00:38:02.306 { 00:38:02.306 "id": 0, 00:38:02.306 "state": "INACTIVE", 00:38:02.306 "utilization": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 1, 00:38:02.306 "state": "OPEN", 00:38:02.306 "utilization": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 2, 00:38:02.306 "state": "OPEN", 00:38:02.306 "utilization": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 3, 00:38:02.306 "state": "FREE", 00:38:02.306 "utilization": 0.0 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "id": 4, 00:38:02.306 "state": "FREE", 00:38:02.306 "utilization": 0.0 00:38:02.306 } 00:38:02.306 ], 00:38:02.306 "read-only": true 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "name": "verbose_mode", 00:38:02.306 "value": true, 00:38:02.306 "unit": "", 00:38:02.306 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:38:02.306 }, 00:38:02.306 { 00:38:02.306 "name": "prep_upgrade_on_shutdown", 00:38:02.306 "value": false, 00:38:02.306 "unit": "", 00:38:02.306 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:38:02.306 } 00:38:02.306 ] 00:38:02.306 } 00:38:02.306 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:38:02.306 12:06:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:02.306 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:38:02.565 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:38:02.565 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:38:02.565 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:38:02.565 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:38:02.565 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:38:02.823 Validate MD5 checksum, iteration 1 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:02.823 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:02.823 [2024-07-25 12:06:59.741216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
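
What the two jq filters in the trace above are doing: upgrade_shutdown.sh pulls the bdev_ftl_get_properties JSON dumped earlier and counts cache chunks with non-zero utilization and bands reported as OPENED. Both counts come back 0 here, so the [[ 0 -ne 0 ]] guards fail and the script proceeds straight to the checksum pass. A minimal standalone sketch of that check, reusing the exact rpc.py command and jq filters shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Count NV cache chunks that hold data (utilization != 0.0).
    used=$("$rpc" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # Count bands the trace's second filter considers still open.
    opened=$("$rpc" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    echo "used=$used opened=$opened"   # both 0 in this run
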
00:38:02.823 [2024-07-25 12:06:59.741550] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85206 ] 00:38:03.080 [2024-07-25 12:06:59.902452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.080 [2024-07-25 12:07:00.096595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.569  Copying: 454/1024 [MB] (454 MBps) Copying: 893/1024 [MB] (439 MBps) Copying: 1024/1024 [MB] (average 445 MBps) 00:38:07.569 00:38:07.569 12:07:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:38:07.569 12:07:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a769af86981a2105804a9d097e15e46a 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a769af86981a2105804a9d097e15e46a != \a\7\6\9\a\f\8\6\9\8\1\a\2\1\0\5\8\0\4\a\9\d\0\9\7\e\1\5\e\4\6\a ]] 00:38:10.099 Validate MD5 checksum, iteration 2 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:10.099 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:10.099 [2024-07-25 12:07:06.803299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
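
Iteration 1 above read the first 1024 MiB back from ftln1 through the NVMe/TCP initiator (tcp_dd wraps spdk_dd with the ini.json initiator config, per the ftl/common.sh lines echoed in the trace) and its md5sum a769af86981a2105804a9d097e15e46a matched; iteration 2 now repeats the read with --skip=1024 for the next gigabyte. A minimal sketch of the loop body inside test_validate_checksum that these xtrace lines imply, where expected_sums stands in for however the harness recorded the write-time checksums (that part is outside this excerpt):

    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        # Read 1024 MiB at the current offset from the FTL bdev over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$(( skip + 1024 ))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # Any mismatch against the checksum taken at write time fails the test.
        [[ $sum == "${expected_sums[i]}" ]] || return 1
    done
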
00:38:10.099 [2024-07-25 12:07:06.804602] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85279 ] 00:38:10.099 [2024-07-25 12:07:06.982977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.356 [2024-07-25 12:07:07.176608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.449  Copying: 468/1024 [MB] (468 MBps) Copying: 940/1024 [MB] (472 MBps) Copying: 1024/1024 [MB] (average 467 MBps) 00:38:14.449 00:38:14.449 12:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:14.450 12:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=39ccb2cb037294f0848e83e134d52197 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 39ccb2cb037294f0848e83e134d52197 != \3\9\c\c\b\2\c\b\0\3\7\2\9\4\f\0\8\4\8\e\8\3\e\1\3\4\d\5\2\1\9\7 ]] 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85145 ]] 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85145 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85353 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85353 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85353 ']' 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:16.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
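
With both checksums validated, the script force-kills the running target so FTL gets no clean shutdown (the kill -9 85145 in the trace above; the corresponding "Killed" message from autotest_common.sh appears below), then starts a fresh spdk_tgt (pid 85353) from the saved tgt.json. The startup that follows therefore takes the dirty-state recovery path: note the Initialize recovery, Recover band state, Recover chunk state and open-chunk recovery actions below, none of which ran in the first, clean startup. A sketch assembled from the ftl/common.sh fragments echoed in the trace, with $spdk_dir standing in for the repo path; anything not visible in the trace is an assumption:

    tcp_target_shutdown_dirty() {
        # SIGKILL, so FTL cannot run its shutdown/upgrade steps and stays dirty.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        "$spdk_dir/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$spdk_dir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers
    }
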
00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:16.978 12:07:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:16.978 [2024-07-25 12:07:13.704213] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:16.978 [2024-07-25 12:07:13.704386] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85353 ] 00:38:16.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 85145 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:38:16.978 [2024-07-25 12:07:13.868982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.235 [2024-07-25 12:07:14.096074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.168 [2024-07-25 12:07:14.884044] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:18.168 [2024-07-25 12:07:14.884132] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:18.168 [2024-07-25 12:07:15.032648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.032746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:38:18.168 [2024-07-25 12:07:15.032769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:18.168 [2024-07-25 12:07:15.032782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.032871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.032890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:18.168 [2024-07-25 12:07:15.032914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:38:18.168 [2024-07-25 12:07:15.032925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.032964] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:38:18.168 [2024-07-25 12:07:15.033955] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:38:18.168 [2024-07-25 12:07:15.034000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.034015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:18.168 [2024-07-25 12:07:15.034028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:38:18.168 [2024-07-25 12:07:15.034045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.034556] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:38:18.168 [2024-07-25 12:07:15.055868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.055953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:38:18.168 [2024-07-25 12:07:15.055988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.308 ms 00:38:18.168 [2024-07-25 12:07:15.056000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.068608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:38:18.168 [2024-07-25 12:07:15.068712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:38:18.168 [2024-07-25 12:07:15.068735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:38:18.168 [2024-07-25 12:07:15.068747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.069339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.069381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:18.168 [2024-07-25 12:07:15.069399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:38:18.168 [2024-07-25 12:07:15.069411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.069502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.069521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:18.168 [2024-07-25 12:07:15.069534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:38:18.168 [2024-07-25 12:07:15.069545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.168 [2024-07-25 12:07:15.069590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.168 [2024-07-25 12:07:15.069606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:38:18.169 [2024-07-25 12:07:15.069623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:38:18.169 [2024-07-25 12:07:15.069634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.169 [2024-07-25 12:07:15.069672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:38:18.169 [2024-07-25 12:07:15.073993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.169 [2024-07-25 12:07:15.074051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:18.169 [2024-07-25 12:07:15.074068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.330 ms 00:38:18.169 [2024-07-25 12:07:15.074079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.169 [2024-07-25 12:07:15.074128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.169 [2024-07-25 12:07:15.074158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:38:18.169 [2024-07-25 12:07:15.074173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:38:18.169 [2024-07-25 12:07:15.074184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.169 [2024-07-25 12:07:15.074258] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:38:18.169 [2024-07-25 12:07:15.074291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:38:18.169 [2024-07-25 12:07:15.074350] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:38:18.169 [2024-07-25 12:07:15.074375] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:38:18.169 [2024-07-25 12:07:15.074483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:38:18.169 [2024-07-25 12:07:15.074499] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:38:18.169 [2024-07-25 12:07:15.074514] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:38:18.169 [2024-07-25 12:07:15.074528] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:38:18.169 [2024-07-25 12:07:15.074541] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:38:18.169 [2024-07-25 12:07:15.074553] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:38:18.169 [2024-07-25 12:07:15.074569] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:38:18.169 [2024-07-25 12:07:15.074581] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:38:18.169 [2024-07-25 12:07:15.074592] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:38:18.169 [2024-07-25 12:07:15.074604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.169 [2024-07-25 12:07:15.074619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:38:18.169 [2024-07-25 12:07:15.074631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.350 ms 00:38:18.169 [2024-07-25 12:07:15.074642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.169 [2024-07-25 12:07:15.074763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.169 [2024-07-25 12:07:15.074782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:38:18.169 [2024-07-25 12:07:15.074795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:38:18.169 [2024-07-25 12:07:15.074811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.169 [2024-07-25 12:07:15.074927] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:38:18.169 [2024-07-25 12:07:15.074944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:38:18.169 [2024-07-25 12:07:15.074956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:18.169 [2024-07-25 12:07:15.074968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.074979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:38:18.169 [2024-07-25 12:07:15.074990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:38:18.169 [2024-07-25 12:07:15.075010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:38:18.169 [2024-07-25 12:07:15.075020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:38:18.169 [2024-07-25 12:07:15.075030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:38:18.169 [2024-07-25 12:07:15.075050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:38:18.169 [2024-07-25 12:07:15.075060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:38:18.169 [2024-07-25 12:07:15.075080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:38:18.169 [2024-07-25 12:07:15.075090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:38:18.169 [2024-07-25 12:07:15.075110] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:38:18.169 [2024-07-25 12:07:15.075120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:38:18.169 [2024-07-25 12:07:15.075141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:38:18.169 [2024-07-25 12:07:15.075151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:38:18.169 [2024-07-25 12:07:15.075171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:38:18.169 [2024-07-25 12:07:15.075181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:38:18.169 [2024-07-25 12:07:15.075201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:38:18.169 [2024-07-25 12:07:15.075210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:38:18.169 [2024-07-25 12:07:15.075230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:38:18.169 [2024-07-25 12:07:15.075240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:38:18.169 [2024-07-25 12:07:15.075260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:38:18.169 [2024-07-25 12:07:15.075270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:38:18.169 [2024-07-25 12:07:15.075290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:38:18.169 [2024-07-25 12:07:15.075320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:38:18.169 [2024-07-25 12:07:15.075349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:38:18.169 [2024-07-25 12:07:15.075359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075368] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:38:18.169 [2024-07-25 12:07:15.075379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:38:18.169 [2024-07-25 12:07:15.075390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:38:18.169 [2024-07-25 12:07:15.075412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:38:18.169 [2024-07-25 12:07:15.075422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:38:18.169 [2024-07-25 12:07:15.075448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:38:18.169 [2024-07-25 12:07:15.075461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:38:18.169 [2024-07-25 12:07:15.075471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:38:18.169 [2024-07-25 12:07:15.075482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:38:18.169 [2024-07-25 12:07:15.075494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:38:18.169 [2024-07-25 12:07:15.075512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:38:18.169 [2024-07-25 12:07:15.075538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:38:18.169 [2024-07-25 12:07:15.075571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:38:18.169 [2024-07-25 12:07:15.075582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:38:18.169 [2024-07-25 12:07:15.075594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:38:18.169 [2024-07-25 12:07:15.075604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:38:18.169 [2024-07-25 12:07:15.075683] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:38:18.169 [2024-07-25 12:07:15.075711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:18.169 [2024-07-25 12:07:15.075736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:38:18.170 [2024-07-25 12:07:15.075748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:38:18.170 [2024-07-25 12:07:15.075770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:38:18.170 [2024-07-25 12:07:15.075782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.075794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:38:18.170 [2024-07-25 12:07:15.075805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:38:18.170 [2024-07-25 12:07:15.075816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.107917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.107992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:18.170 [2024-07-25 12:07:15.108012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.021 ms 00:38:18.170 [2024-07-25 12:07:15.108025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.108101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.108116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:38:18.170 [2024-07-25 12:07:15.108129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:38:18.170 [2024-07-25 12:07:15.108147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.147165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.147236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:18.170 [2024-07-25 12:07:15.147257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.913 ms 00:38:18.170 [2024-07-25 12:07:15.147268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.147360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.147377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:18.170 [2024-07-25 12:07:15.147391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:18.170 [2024-07-25 12:07:15.147402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.147597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.147617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:18.170 [2024-07-25 12:07:15.147631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:38:18.170 [2024-07-25 12:07:15.147642] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.147725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.147749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:18.170 [2024-07-25 12:07:15.147762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:38:18.170 [2024-07-25 12:07:15.147773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.165526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.165601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:18.170 [2024-07-25 12:07:15.165636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.719 ms 00:38:18.170 [2024-07-25 12:07:15.165648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.165872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.165896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:38:18.170 [2024-07-25 12:07:15.165909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:38:18.170 [2024-07-25 12:07:15.165920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.170 [2024-07-25 12:07:15.195241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.170 [2024-07-25 12:07:15.195327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:38:18.170 [2024-07-25 12:07:15.195349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.279 ms 00:38:18.170 [2024-07-25 12:07:15.195362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.208231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.428 [2024-07-25 12:07:15.208290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:38:18.428 [2024-07-25 12:07:15.208309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:38:18.428 [2024-07-25 12:07:15.208320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.282271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.428 [2024-07-25 12:07:15.282354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:38:18.428 [2024-07-25 12:07:15.282384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.837 ms 00:38:18.428 [2024-07-25 12:07:15.282398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.282662] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:38:18.428 [2024-07-25 12:07:15.282831] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:38:18.428 [2024-07-25 12:07:15.282972] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:38:18.428 [2024-07-25 12:07:15.283107] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:38:18.428 [2024-07-25 12:07:15.283127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.428 [2024-07-25 12:07:15.283140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:38:18.428 [2024-07-25 
12:07:15.283161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.639 ms 00:38:18.428 [2024-07-25 12:07:15.283172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.283309] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:38:18.428 [2024-07-25 12:07:15.283331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.428 [2024-07-25 12:07:15.283343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:38:18.428 [2024-07-25 12:07:15.283355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:38:18.428 [2024-07-25 12:07:15.283380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.303116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.428 [2024-07-25 12:07:15.303185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:38:18.428 [2024-07-25 12:07:15.303205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.700 ms 00:38:18.428 [2024-07-25 12:07:15.303217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.315527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:18.428 [2024-07-25 12:07:15.315606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:38:18.428 [2024-07-25 12:07:15.315627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:38:18.428 [2024-07-25 12:07:15.315644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:18.428 [2024-07-25 12:07:15.315921] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:38:18.996 [2024-07-25 12:07:15.797080] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:38:18.996 [2024-07-25 12:07:15.797294] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:38:19.254 [2024-07-25 12:07:16.269992] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:38:19.254 [2024-07-25 12:07:16.270133] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:19.254 [2024-07-25 12:07:16.270166] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:38:19.254 [2024-07-25 12:07:16.270184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.270197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:38:19.254 [2024-07-25 12:07:16.270213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 954.394 ms 00:38:19.254 [2024-07-25 12:07:16.270236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.270286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.270302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:38:19.254 [2024-07-25 12:07:16.270315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:19.254 [2024-07-25 12:07:16.270327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:38:19.254 [2024-07-25 12:07:16.282945] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:38:19.254 [2024-07-25 12:07:16.283110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.283135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:38:19.254 [2024-07-25 12:07:16.283151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.751 ms 00:38:19.254 [2024-07-25 12:07:16.283163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.283964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.284002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:38:19.254 [2024-07-25 12:07:16.284017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.654 ms 00:38:19.254 [2024-07-25 12:07:16.284029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.286542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.286574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:38:19.254 [2024-07-25 12:07:16.286588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.474 ms 00:38:19.254 [2024-07-25 12:07:16.286599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.286648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.286665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:38:19.254 [2024-07-25 12:07:16.286678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:19.254 [2024-07-25 12:07:16.286706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.286841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.286863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:38:19.254 [2024-07-25 12:07:16.286875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:38:19.254 [2024-07-25 12:07:16.286887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.286915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.286929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:38:19.254 [2024-07-25 12:07:16.286941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:19.254 [2024-07-25 12:07:16.286951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.286992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:38:19.254 [2024-07-25 12:07:16.287009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 12:07:16.287020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:38:19.254 [2024-07-25 12:07:16.287036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:38:19.254 [2024-07-25 12:07:16.287047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.254 [2024-07-25 12:07:16.287108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:19.254 [2024-07-25 
12:07:16.287124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:38:19.254 [2024-07-25 12:07:16.287136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:38:19.254 [2024-07-25 12:07:16.287146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:19.513 [2024-07-25 12:07:16.288376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1255.246 ms, result 0 00:38:19.513 [2024-07-25 12:07:16.303764] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.513 [2024-07-25 12:07:16.319783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:38:19.513 [2024-07-25 12:07:16.328658] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:19.513 Validate MD5 checksum, iteration 1 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:19.513 12:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:19.513 [2024-07-25 12:07:16.466040] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:38:19.514 [2024-07-25 12:07:16.466246] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85388 ] 00:38:19.772 [2024-07-25 12:07:16.644042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.030 [2024-07-25 12:07:16.841095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.819  Copying: 515/1024 [MB] (515 MBps) Copying: 1020/1024 [MB] (505 MBps) Copying: 1024/1024 [MB] (average 510 MBps) 00:38:25.819 00:38:26.079 12:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:38:26.079 12:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:28.609 Validate MD5 checksum, iteration 2 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a769af86981a2105804a9d097e15e46a 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a769af86981a2105804a9d097e15e46a != \a\7\6\9\a\f\8\6\9\8\1\a\2\1\0\5\8\0\4\a\9\d\0\9\7\e\1\5\e\4\6\a ]] 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:28.609 12:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:28.609 [2024-07-25 12:07:25.200925] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:38:28.609 [2024-07-25 12:07:25.201070] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85477 ] 00:38:28.609 [2024-07-25 12:07:25.385848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.609 [2024-07-25 12:07:25.591540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.351  Copying: 507/1024 [MB] (507 MBps) Copying: 1002/1024 [MB] (495 MBps) Copying: 1024/1024 [MB] (average 500 MBps) 00:38:33.351 00:38:33.351 12:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:33.351 12:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=39ccb2cb037294f0848e83e134d52197 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 39ccb2cb037294f0848e83e134d52197 != \3\9\c\c\b\2\c\b\0\3\7\2\9\4\f\0\8\4\8\e\8\3\e\1\3\4\d\5\2\1\9\7 ]] 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85353 ]] 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85353 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85353 ']' 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85353 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85353 00:38:35.879 killing process with pid 85353 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85353' 00:38:35.879 12:07:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85353 00:38:35.879 12:07:32 
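The two validation passes above follow the same loop, traced from upgrade_shutdown.sh (script lines 96-105 in the xtrace). A minimal sketch of that pattern, rebuilt only from commands visible in the trace: tcp_dd is reconstructed from the spdk_dd invocation at ftl/common.sh@199, and md5_expected is an assumed array holding the sums recorded when the data was written earlier in the test (that phase is not part of this excerpt).

    #!/usr/bin/env bash
    # Sketch of the checksum-validation loop seen in the trace above.
    spdk=/home/vagrant/spdk_repo/spdk
    file=$spdk/test/ftl/file
    iterations=2
    skip=0
    # md5_expected=(...)  # assumed: captured during the write phase, not shown here

    # Reconstructed from ftl/common.sh@198-199: run spdk_dd against the
    # NVMe/TCP initiator config produced earlier by tcp_initiator_setup.
    tcp_dd() {
        "$spdk/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$spdk/test/ftl/config/ini.json" "$@"
    }

    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks back from the ftln1 bdev, offset by $skip blocks
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 '-d ')
        [[ $sum == "${md5_expected[i]}" ]] || exit 1
    done

This run passed both comparisons (a769af86... and 39ccb2cb... each matched their expected value), so the loop falls through to the teardown that continues below.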
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85353 00:38:36.813 [2024-07-25 12:07:33.668794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:38:36.813 [2024-07-25 12:07:33.686185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.686251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:38:36.813 [2024-07-25 12:07:33.686272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:36.813 [2024-07-25 12:07:33.686284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.686317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:38:36.813 [2024-07-25 12:07:33.689649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.689685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:38:36.813 [2024-07-25 12:07:33.689713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.310 ms 00:38:36.813 [2024-07-25 12:07:33.689725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.689985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.690004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:38:36.813 [2024-07-25 12:07:33.690018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.233 ms 00:38:36.813 [2024-07-25 12:07:33.690029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.691274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.691313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:38:36.813 [2024-07-25 12:07:33.691328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.223 ms 00:38:36.813 [2024-07-25 12:07:33.691348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.692602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.692631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:38:36.813 [2024-07-25 12:07:33.692646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.211 ms 00:38:36.813 [2024-07-25 12:07:33.692657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.705632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.705730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:38:36.813 [2024-07-25 12:07:33.705770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.899 ms 00:38:36.813 [2024-07-25 12:07:33.705782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.712810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.712858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:38:36.813 [2024-07-25 12:07:33.712876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.955 ms 00:38:36.813 [2024-07-25 12:07:33.712888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.712980] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.713009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:38:36.813 [2024-07-25 12:07:33.713023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:38:36.813 [2024-07-25 12:07:33.713039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.725374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.725417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:38:36.813 [2024-07-25 12:07:33.725433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.312 ms 00:38:36.813 [2024-07-25 12:07:33.725445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.737889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.737926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:38:36.813 [2024-07-25 12:07:33.737941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.401 ms 00:38:36.813 [2024-07-25 12:07:33.737952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.750564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.750637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:38:36.813 [2024-07-25 12:07:33.750656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.563 ms 00:38:36.813 [2024-07-25 12:07:33.750668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.763491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.763568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:38:36.813 [2024-07-25 12:07:33.763587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.695 ms 00:38:36.813 [2024-07-25 12:07:33.763598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.763662] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:38:36.813 [2024-07-25 12:07:33.763688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:36.813 [2024-07-25 12:07:33.763727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:38:36.813 [2024-07-25 12:07:33.763740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:38:36.813 [2024-07-25 12:07:33.763752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:36.813 [2024-07-25 12:07:33.763958] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:38:36.813 [2024-07-25 12:07:33.763970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e688ba0b-26b9-47e6-b284-364529faa45a 00:38:36.813 [2024-07-25 12:07:33.763982] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:38:36.813 [2024-07-25 12:07:33.763993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:38:36.813 [2024-07-25 12:07:33.764004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:38:36.813 [2024-07-25 12:07:33.764015] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:38:36.813 [2024-07-25 12:07:33.764026] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:38:36.813 [2024-07-25 12:07:33.764037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:38:36.813 [2024-07-25 12:07:33.764054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:38:36.813 [2024-07-25 12:07:33.764064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:38:36.813 [2024-07-25 12:07:33.764074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:38:36.813 [2024-07-25 12:07:33.764088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.764100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:38:36.813 [2024-07-25 12:07:33.764113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:38:36.813 [2024-07-25 12:07:33.764124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.781142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.781207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:38:36.813 [2024-07-25 12:07:33.781229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.982 ms 00:38:36.813 [2024-07-25 12:07:33.781255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.781797] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:38:36.813 [2024-07-25 12:07:33.781823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:38:36.813 [2024-07-25 12:07:33.781838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.435 ms 00:38:36.813 [2024-07-25 12:07:33.781849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.833869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:36.813 [2024-07-25 12:07:33.833937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:36.813 [2024-07-25 12:07:33.833957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:36.813 [2024-07-25 12:07:33.833977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.834039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:36.813 [2024-07-25 12:07:33.834054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:36.813 [2024-07-25 12:07:33.834067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:36.813 [2024-07-25 12:07:33.834078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.834223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:36.813 [2024-07-25 12:07:33.834245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:36.813 [2024-07-25 12:07:33.834258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:36.813 [2024-07-25 12:07:33.834270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:36.813 [2024-07-25 12:07:33.834301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:36.813 [2024-07-25 12:07:33.834315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:36.813 [2024-07-25 12:07:33.834327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:36.813 [2024-07-25 12:07:33.834338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:33.933014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:33.933082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:37.071 [2024-07-25 12:07:33.933100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:33.933123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.018579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.018684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:37.071 [2024-07-25 12:07:34.018702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.018738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.018872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.018892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:37.071 [2024-07-25 12:07:34.018905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.018917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 
12:07:34.018977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.019005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:37.071 [2024-07-25 12:07:34.019020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.019031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.019155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.019174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:37.071 [2024-07-25 12:07:34.019187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.019198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.019247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.019271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:38:37.071 [2024-07-25 12:07:34.019283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.019294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.019342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.019357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:37.071 [2024-07-25 12:07:34.019370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.019380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.019433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:37.071 [2024-07-25 12:07:34.019456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:37.071 [2024-07-25 12:07:34.019468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:37.071 [2024-07-25 12:07:34.019479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:37.071 [2024-07-25 12:07:34.019624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 333.407 ms, result 0 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:38.447 Remove shared memory files 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85145 
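The 'FTL shutdown' management process above finished with result 0, and the trace then walks a teardown chain through upgrade_shutdown.sh and ftl/common.sh. A rough reconstruction from the xtrace line markers alone; the real helpers almost certainly carry error handling not exercised in this run.

    # upgrade_shutdown.sh@119 and @11-14: test-level cleanup as traced above
    cleanup() {
        trap - SIGINT SIGTERM EXIT
        rm -f "$spdk/test/ftl/file" "$spdk/test/ftl/file.md5"
        tcp_cleanup                        # -> tcp_target_cleanup (ftl/common.sh@193)
    }

    # ftl/common.sh@144-145: shut the target down, then drop its config
    tcp_target_cleanup() {
        tcp_target_shutdown                        # @144
        rm -f "$spdk/test/ftl/config/tgt.json"     # @145
    }

    # ftl/common.sh@130-132
    tcp_target_shutdown() {
        if [[ -n $spdk_tgt_pid ]]; then    # @130: pid 85353 in this run
            killprocess "$spdk_tgt_pid"    # @131
            unset spdk_tgt_pid             # @132
        fi
    }

    # @188-189 then handle the initiator side (no initiator pid in this run, so
    # only ini.json is removed), and remove_shm (@204-209) clears the /dev/shm
    # leftovers: the trace removes /dev/shm/spdk_tgt_trace.pid85145 and /dev/shm/iscsi.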
00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:38:38.447 00:38:38.447 real 1m34.298s 00:38:38.447 user 2m16.039s 00:38:38.447 sys 0m22.770s 00:38:38.447 ************************************ 00:38:38.447 END TEST ftl_upgrade_shutdown 00:38:38.447 ************************************ 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:38.447 12:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@14 -- # killprocess 77906 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@950 -- # '[' -z 77906 ']' 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@954 -- # kill -0 77906 00:38:38.447 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77906) - No such process 00:38:38.447 Process with pid 77906 is not found 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 77906 is not found' 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85617 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:38.447 12:07:35 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85617 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@831 -- # '[' -z 85617 ']' 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:38.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:38.447 12:07:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:38.447 [2024-07-25 12:07:35.358929] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
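Both branches of killprocess show up in this log: pid 85353 above was alive, identified as reactor_0 and killed, while pid 77906 here is already gone, so kill -0 fails and the helper only reports it. A sketch of the helper as it can be read back from the autotest_common.sh trace markers (@950 through @977); the real function likely covers more cases than the two exercised in this run.

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                                # @950
        if kill -0 "$pid"; then                                  # @954: still alive?
            local process_name
            if [[ $(uname) == Linux ]]; then                     # @955
                process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_0 here
            fi
            [[ $process_name == sudo ]] && return 1              # @960: refuse to kill sudo
            echo "killing process with pid $pid"                 # @968
            kill "$pid"                                          # @969
            wait "$pid"                                          # @974
        else
            echo "Process with pid $pid is not found"            # @977
        fi
    }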
00:38:38.447 [2024-07-25 12:07:35.359103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85617 ] 00:38:38.707 [2024-07-25 12:07:35.529945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.965 [2024-07-25 12:07:35.757056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.545 12:07:36 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:39.545 12:07:36 ftl -- common/autotest_common.sh@864 -- # return 0 00:38:39.545 12:07:36 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:39.804 nvme0n1 00:38:39.804 12:07:36 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:38:39.804 12:07:36 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:39.804 12:07:36 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:40.062 12:07:37 ftl -- ftl/common.sh@28 -- # stores=e4192db5-ca81-463d-8e68-fb55ca0caac1 00:38:40.062 12:07:37 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:38:40.062 12:07:37 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4192db5-ca81-463d-8e68-fb55ca0caac1 00:38:40.321 12:07:37 ftl -- ftl/ftl.sh@23 -- # killprocess 85617 00:38:40.321 12:07:37 ftl -- common/autotest_common.sh@950 -- # '[' -z 85617 ']' 00:38:40.321 12:07:37 ftl -- common/autotest_common.sh@954 -- # kill -0 85617 00:38:40.321 12:07:37 ftl -- common/autotest_common.sh@955 -- # uname 00:38:40.321 12:07:37 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:40.321 12:07:37 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85617 00:38:40.579 12:07:37 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:40.579 12:07:37 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:40.579 killing process with pid 85617 00:38:40.579 12:07:37 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85617' 00:38:40.579 12:07:37 ftl -- common/autotest_common.sh@969 -- # kill 85617 00:38:40.579 12:07:37 ftl -- common/autotest_common.sh@974 -- # wait 85617 00:38:42.508 12:07:39 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:42.766 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:42.766 Waiting for block devices as requested 00:38:42.766 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:42.766 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:43.025 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:43.025 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:48.288 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:48.288 12:07:45 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:38:48.288 Remove shared memory files 00:38:48.288 12:07:45 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:48.288 12:07:45 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:38:48.288 12:07:45 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:38:48.288 12:07:45 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:38:48.288 12:07:45 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:48.288 12:07:45 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:38:48.288 00:38:48.288 real 
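Before the next stage, nvme0 is attached over PCIe and any stale lvol stores are cleared. The clear_lvols helper reads back almost verbatim from the ftl/common.sh@28-30 trace, with rpc_py standing for scripts/rpc.py exactly as invoked above; the single UUID it finds here, e4192db5-..., is deleted just below.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    clear_lvols() {
        # @28: list every lvstore UUID currently known to the target
        stores=$($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do                         # @29
            $rpc_py bdev_lvol_delete_lvstore -u "$lvs" # @30: delete each store by UUID
        done
    }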
11m40.453s 00:38:48.288 user 14m37.156s 00:38:48.288 sys 1m31.279s 00:38:48.288 12:07:45 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:48.288 12:07:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:48.288 ************************************ 00:38:48.288 END TEST ftl 00:38:48.288 ************************************ 00:38:48.288 12:07:45 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:38:48.288 12:07:45 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:48.288 12:07:45 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:38:48.288 12:07:45 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:38:48.288 12:07:45 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:38:48.288 12:07:45 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:38:48.288 12:07:45 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:38:48.288 12:07:45 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:38:48.288 12:07:45 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:38:48.288 12:07:45 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:38:48.288 12:07:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:48.288 12:07:45 -- common/autotest_common.sh@10 -- # set +x 00:38:48.288 12:07:45 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:38:48.288 12:07:45 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:38:48.288 12:07:45 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:38:48.288 12:07:45 -- common/autotest_common.sh@10 -- # set +x 00:38:49.661 INFO: APP EXITING 00:38:49.661 INFO: killing all VMs 00:38:49.661 INFO: killing vhost app 00:38:49.661 INFO: EXIT DONE 00:38:49.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:50.227 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:50.227 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:50.227 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:50.227 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:50.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:51.080 Cleaning 00:38:51.080 Removing: /var/run/dpdk/spdk0/config 00:38:51.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:51.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:51.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:51.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:51.080 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:51.080 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:51.080 Removing: /var/run/dpdk/spdk0 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62001 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62222 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62438 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62542 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62598 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62726 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62744 00:38:51.080 Removing: /var/run/dpdk/spdk_pid62930 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63022 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63121 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63229 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63324 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63371 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63413 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63475 00:38:51.080 Removing: /var/run/dpdk/spdk_pid63565 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64030 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64100 00:38:51.080 
Removing: /var/run/dpdk/spdk_pid64175 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64191 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64326 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64342 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64485 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64501 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64569 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64594 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64647 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64676 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64845 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64887 00:38:51.080 Removing: /var/run/dpdk/spdk_pid64962 00:38:51.080 Removing: /var/run/dpdk/spdk_pid65135 00:38:51.080 Removing: /var/run/dpdk/spdk_pid65224 00:38:51.080 Removing: /var/run/dpdk/spdk_pid65272 00:38:51.080 Removing: /var/run/dpdk/spdk_pid65734 00:38:51.080 Removing: /var/run/dpdk/spdk_pid65837 00:38:51.080 Removing: /var/run/dpdk/spdk_pid65952 00:38:51.080 Removing: /var/run/dpdk/spdk_pid66010 00:38:51.080 Removing: /var/run/dpdk/spdk_pid66036 00:38:51.080 Removing: /var/run/dpdk/spdk_pid66112 00:38:51.080 Removing: /var/run/dpdk/spdk_pid66743 00:38:51.080 Removing: /var/run/dpdk/spdk_pid66785 00:38:51.080 Removing: /var/run/dpdk/spdk_pid67294 00:38:51.080 Removing: /var/run/dpdk/spdk_pid67398 00:38:51.080 Removing: /var/run/dpdk/spdk_pid67518 00:38:51.080 Removing: /var/run/dpdk/spdk_pid67571 00:38:51.080 Removing: /var/run/dpdk/spdk_pid67601 00:38:51.080 Removing: /var/run/dpdk/spdk_pid67628 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69482 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69626 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69630 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69642 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69687 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69691 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69703 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69748 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69752 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69764 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69815 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69819 00:38:51.080 Removing: /var/run/dpdk/spdk_pid69831 00:38:51.080 Removing: /var/run/dpdk/spdk_pid71183 00:38:51.080 Removing: /var/run/dpdk/spdk_pid71285 00:38:51.080 Removing: /var/run/dpdk/spdk_pid72687 00:38:51.080 Removing: /var/run/dpdk/spdk_pid74036 00:38:51.080 Removing: /var/run/dpdk/spdk_pid74152 00:38:51.081 Removing: /var/run/dpdk/spdk_pid74272 00:38:51.081 Removing: /var/run/dpdk/spdk_pid74391 00:38:51.081 Removing: /var/run/dpdk/spdk_pid74535 00:38:51.081 Removing: /var/run/dpdk/spdk_pid74615 00:38:51.081 Removing: /var/run/dpdk/spdk_pid74755 00:38:51.081 Removing: /var/run/dpdk/spdk_pid75118 00:38:51.081 Removing: /var/run/dpdk/spdk_pid75160 00:38:51.081 Removing: /var/run/dpdk/spdk_pid75637 00:38:51.081 Removing: /var/run/dpdk/spdk_pid75825 00:38:51.081 Removing: /var/run/dpdk/spdk_pid75926 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76041 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76098 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76129 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76435 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76512 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76594 00:38:51.081 Removing: /var/run/dpdk/spdk_pid76982 00:38:51.081 Removing: /var/run/dpdk/spdk_pid77124 00:38:51.081 Removing: /var/run/dpdk/spdk_pid77906 00:38:51.081 Removing: /var/run/dpdk/spdk_pid78046 00:38:51.081 Removing: /var/run/dpdk/spdk_pid78267 00:38:51.081 Removing: 
/var/run/dpdk/spdk_pid78364 00:38:51.081 Removing: /var/run/dpdk/spdk_pid78719 00:38:51.081 Removing: /var/run/dpdk/spdk_pid78990 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79349 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79544 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79680 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79744 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79886 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79918 00:38:51.081 Removing: /var/run/dpdk/spdk_pid79982 00:38:51.081 Removing: /var/run/dpdk/spdk_pid80179 00:38:51.081 Removing: /var/run/dpdk/spdk_pid80415 00:38:51.081 Removing: /var/run/dpdk/spdk_pid80827 00:38:51.339 Removing: /var/run/dpdk/spdk_pid81261 00:38:51.340 Removing: /var/run/dpdk/spdk_pid81697 00:38:51.340 Removing: /var/run/dpdk/spdk_pid82216 00:38:51.340 Removing: /var/run/dpdk/spdk_pid82358 00:38:51.340 Removing: /var/run/dpdk/spdk_pid82460 00:38:51.340 Removing: /var/run/dpdk/spdk_pid83110 00:38:51.340 Removing: /var/run/dpdk/spdk_pid83196 00:38:51.340 Removing: /var/run/dpdk/spdk_pid83636 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84057 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84551 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84668 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84721 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84791 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84859 00:38:51.340 Removing: /var/run/dpdk/spdk_pid84930 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85145 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85206 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85279 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85353 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85388 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85477 00:38:51.340 Removing: /var/run/dpdk/spdk_pid85617 00:38:51.340 Clean 00:38:51.340 12:07:48 -- common/autotest_common.sh@1451 -- # return 0 00:38:51.340 12:07:48 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:38:51.340 12:07:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:51.340 12:07:48 -- common/autotest_common.sh@10 -- # set +x 00:38:51.340 12:07:48 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:38:51.340 12:07:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:51.340 12:07:48 -- common/autotest_common.sh@10 -- # set +x 00:38:51.340 12:07:48 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:51.340 12:07:48 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:38:51.340 12:07:48 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:38:51.340 12:07:48 -- spdk/autotest.sh@395 -- # hash lcov 00:38:51.340 12:07:48 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:51.340 12:07:48 -- spdk/autotest.sh@397 -- # hostname 00:38:51.340 12:07:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:38:51.598 geninfo: WARNING: invalid characters removed from testname! 
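The coverage post-processing that follows runs one lcov merge (autotest.sh@398) and then five filter passes (@399 through @403) with identical flags; condensed, the sequence amounts to the loop below. This is a summary of the traced commands, not the literal autotest.sh source.

    out=/home/vagrant/spdk_repo/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
        --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
        --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

    # @398: merge the baseline and test captures into one tracefile
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" \
        -o "$out/cov_total.info"

    # @399-@403: strip DPDK, system headers, and example/app sources
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
        '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" \
            -o "$out/cov_total.info"
    done

    rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR   # @404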
00:39:23.684 12:08:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:23.684 12:08:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:26.967 12:08:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:29.497 12:08:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:32.027 12:08:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:35.309 12:08:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:37.835 12:08:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:37.835 12:08:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:39:37.835 12:08:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:37.835 12:08:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:37.835 12:08:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:37.835 12:08:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:37.835 12:08:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:37.835 12:08:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:37.835 12:08:34 -- paths/export.sh@5 -- $ export PATH
00:39:37.835 12:08:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:37.835 12:08:34 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:39:37.835 12:08:34 -- common/autobuild_common.sh@447 -- $ date +%s
00:39:37.835 12:08:34 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721909314.XXXXXX
00:39:37.835 12:08:34 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721909314.WHQJix
00:39:37.835 12:08:34 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:39:37.835 12:08:34 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:39:37.835 12:08:34 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:39:37.835 12:08:34 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:39:37.835 12:08:34 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:39:37.835 12:08:34 -- common/autobuild_common.sh@463 -- $ get_config_params
00:39:37.835 12:08:34 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:39:37.835 12:08:34 -- common/autotest_common.sh@10 -- $ set +x
00:39:37.835 12:08:34 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:39:37.835 12:08:34 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:39:37.835 12:08:34 -- pm/common@17 -- $ local monitor
00:39:37.835 12:08:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:37.835 12:08:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:37.835 12:08:34 -- pm/common@25 -- $ sleep 1
00:39:37.835 12:08:34 -- pm/common@21 -- $ date +%s
00:39:37.835 12:08:34 -- pm/common@21 -- $ date +%s
00:39:37.835 12:08:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721909314
00:39:37.835 12:08:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721909314
00:39:37.835 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721909314_collect-vmstat.pm.log
00:39:37.835 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721909314_collect-cpu-load.pm.log
00:39:38.769 12:08:35 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:39:38.769 12:08:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:39:38.769 12:08:35 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:39:38.769 12:08:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:39:38.769 12:08:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:39:38.769 12:08:35 -- spdk/autopackage.sh@19 -- $ timing_finish
00:39:38.769 12:08:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:38.769 12:08:35 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:39:38.769 12:08:35 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:39:38.769 12:08:35 -- spdk/autopackage.sh@20 -- $ exit 0
00:39:38.769 12:08:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:38.769 12:08:35 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:38.769 12:08:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:38.769 12:08:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:38.769 12:08:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:39:38.769 12:08:35 -- pm/common@44 -- $ pid=87299
00:39:38.769 12:08:35 -- pm/common@50 -- $ kill -TERM 87299
00:39:38.769 12:08:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:38.769 12:08:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:39:38.769 12:08:35 -- pm/common@44 -- $ pid=87300
00:39:38.769 12:08:35 -- pm/common@50 -- $ kill -TERM 87300
00:39:38.769 + [[ -n 5206 ]]
00:39:38.769 + sudo kill 5206
00:39:38.776 [Pipeline] }
00:39:38.791 [Pipeline] // timeout
00:39:38.795 [Pipeline] }
00:39:38.807 [Pipeline] // stage
00:39:38.812 [Pipeline] }
00:39:38.824 [Pipeline] // catchError
00:39:38.830 [Pipeline] stage
00:39:38.832 [Pipeline] { (Stop VM)
00:39:38.841 [Pipeline] sh
00:39:39.109 + vagrant halt
00:39:43.289 ==> default: Halting domain...
00:39:48.561 [Pipeline] sh
00:39:48.837 + vagrant destroy -f
00:39:53.054 ==> default: Removing domain...
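The stop_monitor_resources trace above (pm/common@40-50) is a standard pid-file teardown: each collector records its pid under the power/ output directory when it starts, and an EXIT trap later walks MONITOR_RESOURCES, signalling TERM to whichever pid files still exist. A minimal self-contained sketch of that pattern follows; the inline sampler loop is a hypothetical stand-in, not the actual scripts/perf/pm collectors, whose internals this log does not show:

    #!/usr/bin/env bash
    # Sketch of the pid-file monitor pattern traced in the log (pm/common).
    output_dir=${1:-/tmp/power}
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    start_monitor_resources() {
        local monitor
        mkdir -p "$output_dir"
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # Hypothetical collector: sample once a second until signalled.
            ( while :; do uptime >> "$output_dir/$monitor.log"; sleep 1; done ) &
            # Record the pid so teardown can find it, like collect-*.pid above.
            echo $! > "$output_dir/$monitor.pid"
        done
    }

    stop_monitor_resources() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # Same guard as pm/common@43: only signal monitors that left a pid file.
            [[ -e "$output_dir/$monitor.pid" ]] || continue
            pid=$(< "$output_dir/$monitor.pid")
            kill -TERM "$pid" 2>/dev/null || true
            rm -f "$output_dir/$monitor.pid"
        done
    }

    trap stop_monitor_resources EXIT   # mirrors autobuild_common.sh@466
    start_monitor_resources
    sleep 5                            # stand-in for the packaging work being monitored

Checking for the pid file before killing makes teardown idempotent: the trap can fire on any exit path without erroring on monitors that never started.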
00:39:53.632 [Pipeline] sh
00:39:53.913 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:39:53.923 [Pipeline] }
00:39:53.942 [Pipeline] // stage
00:39:53.949 [Pipeline] }
00:39:53.968 [Pipeline] // dir
00:39:53.974 [Pipeline] }
00:39:53.992 [Pipeline] // wrap
00:39:53.998 [Pipeline] }
00:39:54.014 [Pipeline] // catchError
00:39:54.023 [Pipeline] stage
00:39:54.026 [Pipeline] { (Epilogue)
00:39:54.039 [Pipeline] sh
00:39:54.320 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:02.441 [Pipeline] catchError
00:40:02.443 [Pipeline] {
00:40:02.460 [Pipeline] sh
00:40:02.739 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:02.997 Artifacts sizes are good
00:40:03.006 [Pipeline] }
00:40:03.025 [Pipeline] // catchError
00:40:03.038 [Pipeline] archiveArtifacts
00:40:03.046 Archiving artifacts
00:40:03.248 [Pipeline] cleanWs
00:40:03.259 [WS-CLEANUP] Deleting project workspace...
00:40:03.259 [WS-CLEANUP] Deferred wipeout is used...
00:40:03.264 [WS-CLEANUP] done
00:40:03.266 [Pipeline] }
00:40:03.284 [Pipeline] // stage
00:40:03.290 [Pipeline] }
00:40:03.306 [Pipeline] // node
00:40:03.310 [Pipeline] End of Pipeline
00:40:03.348 Finished: SUCCESS
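For reference, the lcov sequence traced at the start of this excerpt (autotest.sh@398-404) reduces to two steps: merge the pre-test baseline and post-test capture into cov_total.info, then strip third-party and generated paths with repeated remove passes. A condensed sketch, assuming "$out" stands in for the job's output directory and trimming the genhtml rc options from the log down to the ones that affect lcov itself:

    #!/usr/bin/env bash
    # Condensed sketch of the coverage post-processing at autotest.sh@398-404.
    set -e
    out=${1:-./output}
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

    # Step 1 (@398): merge the pre-test baseline and the post-test capture.
    lcov "${LCOV_OPTS[@]}" \
        -a "$out/cov_base.info" -a "$out/cov_test.info" \
        -o "$out/cov_total.info"

    # Steps 2-6 (@399-403): one -r (remove) pass per third-party/generated path,
    # writing back over the same tracefile each time.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${LCOV_OPTS[@]}" -r "$out/cov_total.info" "$pattern" \
            -o "$out/cov_total.info"
    done

    # Step 7 (@404): drop the intermediate captures.
    rm -f "$out/cov_base.info" "$out/cov_test.info"

Each -r pass rewrites cov_total.info in place, which is why the log shows five separate filtering invocations of lcov after the merge rather than a single combined call.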